United for Peace of Pierce County - BACKGROUND: Blaming the amygdala may be a bit premature

David Ropeik of the Harvard Extension School is one of those who are promoting the idea that the structure of the human brain gives priority to emotion over rational thought, and in particular that the amygdala is instrumental in this.  --  Ropeik's name has appeared in the New York Times more than 800 times, and on Sunday he published another piece in the Sunday Review section, rehearsing the argument that "the basic architecture of the brain ensures that we feel first and think second."[1]  --  "The part of the brain where the instinctive “fight or flight” signal is first triggered -- the amygdala -- is situated such that it receives incoming stimuli before the parts of the brain that think things over," he wrote.  --  This idea is far from new, and back in 2010, self-satisfied liberals touted the news that scientists at University College London found that "people with conservative views have brains with larger amygdalas," as the Telegraph reported.[2]  --  In fact, promoting such notions is something of a cottage industry.  --  Ropeik himself is a former television reporter[3] who has parlayed a specialization in environmental news into a career in risk analysis (he is now "an independent consultant to government, business, trade associations, consumer groups, and educational institutions" who continues to teach at the Harvard Extension School after leaving the Harvard School of Public Health, where he served for a time as director of communications for its Center for Risk Analysis).  --  Ropeik is the author of How Risky Is It, Really? Why Our Fears Don't Always Match the Facts (McGraw-Hill, 2010) and the co-author of Risk: A Practical Guide for Deciding What's Really Safe and What's Really Dangerous in the World Around You (Mariner Books, 2002).  
--  But a British writer thinks that there is less here than meets the eye, warning, in effect, that disease vectors for a wave of "intellectual pestilence" have been released:  "the 'neural' explanation has become a gold standard of non-fiction exegesis, adding its own brand of computer-assisted lab-coat bling to a whole new industry of intellectual quackery that affects to elucidate even complex sociocultural phenomena," said a piece in the New Statesman a few weeks back.[4]  --  "Happily," wrote Steven Poole, a British author and journalist, "a new branch of the neuroscience-explains-everything genre may be created at any time by the simple expedient of adding the prefix 'neuro' to whatever you are talking about.  Thus, 'neuroeconomics' is the latest in a long line of rhetorical attempts to sell the dismal science as a hard one; 'molecular gastronomy' has now been trumped in the scientized gluttony stakes by 'neurogastronomy'; students of Republican and Democratic brains are doing 'neuropolitics'; literature academics practice 'neurocriticism.'"  --  Poole calls it "self-help armored in hard science," noting that "[i]n a self-congratulatory egalitarian age, you can no longer tell people to improve themselves morally.  So self-improvement is couched in instrumental, scientifically approved terms."  --  The fly in the ointment of all this pseudo-enlightenment is the mistaken assumption that, in the words of Paul Fletcher, professor of health neuroscience at the University of Cambridge, “activity in a brain region is the answer to some profound question about psychological processes."  --  In fact, Poole writes, "That a part of it 'lights up' on an fMRI scan does not mean the rest is inactive; nor is it obvious what any such lighting-up indicates; nor is it straightforward to infer general lessons about life from experiments conducted under highly artificial conditions.  
Nor do we have the faintest clue about the biggest mystery of all -- how does a lump of wet grey matter produce the conscious experience you are having right now, reading this paragraph?  How come the brain gives rise to the mind?  No one knows."  --  In the popular literature on brain science, "the great movie-monster of nearly all the pop brain literature is . . . the amygdala.  It is routinely described as the 'ancient' or 'primitive' brain, scarily atavistic.  There is strong evidence for the amygdala’s role in fear, but then fear is one of the most heavily studied emotions; popularizers downplay or ignore the amygdala’s associations with the cuddlier emotions and memory." ...

1.

Gray matter

INSIDE THE MIND OF WORRY

By David Ropeik

New York Times

September 30, 2012 (posted Sept. 29)

http://www.nytimes.com/2012/09/30/opinion/sunday/why-smart-brains-make-dumb-decisions-about-danger.html

We make all sorts of ostensibly conscious and seemingly rational choices when we are aware of a potential risk.  We eat organic food, max out on multivitamins and quickly forswear some products (even whole technologies) at the slightest hint of danger.  We carry guns and vote for the candidate we think will keep us safe.  Yet these choices are far from carefully considered -- and, surprisingly often, they contravene reason.  What’s more, while our choices about risk invariably feel right when we make them, many of these decisions end up putting us in greater peril.

Researchers in neuroscience, psychology, economics and other disciplines have made a range of discoveries about why human beings sometimes fear more than the evidence warrants, and sometimes less than the evidence warns.  That science is worth reviewing at length.  But one current issue offers a crash course in the most significant of these findings: the fear of vaccines, particularly vaccines for children.

In a 2011 Thomson Reuters/NPR poll, nearly one parent in three with a child under 18 was worried about vaccines, and roughly one American in four was concerned about the value and safety of vaccines in general.  In the same poll, roughly one out of every five college-educated respondents worried that childhood vaccination was connected with autism; 7 percent said they feared a link with Type 1 diabetes.

Based on the evidence, these and most other concerns about vaccines are unfounded.  A comprehensive report last year from the Institute of Medicine is just one of many studies to report that vaccines do not cause autism, diabetes, asthma or other major afflictions listed by the anti-vaccination movement.

Yet these fears, fierce and visceral, persist.  To frustrated doctors and health officials, vaccine-phobia seems an irrational denial of the facts that puts both the unvaccinated child and the community at greater risk (as herd immunity goes down, disease spread rises).  But the more we learn about how risk perception works, the more understandable -- if still quite dangerous -- the fear of vaccines becomes.

Along with many others, the cognitive psychologists Paul Slovic of the University of Oregon and Baruch Fischhoff of Carnegie Mellon University have identified several reasons something might feel more or less scary than mere reason might suppose.  Humans subconsciously weigh the risks and benefits of any choice or course of action -- and if taking a particular action seems to afford little or no benefit, the risk automatically feels bigger.  Vaccinations are a striking example.  As the subconscious mind might view it, vaccines protect children from diseases like measles and pertussis, or whooping cough, that are no longer common, so the benefit to vaccination feels small -- and smaller still, perhaps, compared to even the minuscule risk of a serious side effect.  (In actuality, outbreaks of both of these infections have been more common in recent years, according to the Centers for Disease Control and Prevention.)  Contrast this with how people felt in the 1950s, in the frightening days of polio, when parents lined their kids up for vaccines that carried much greater risk than do the modern ones.  The risk felt smaller, because the benefit was abundantly clear.

Professor Slovic and Professor Fischhoff and others have found that a risk imposed upon a person, like mandatory vaccination programs (nearly all of which allow people to opt out), feels scarier than the same risk if taken voluntarily.  Risk perception also depends on trust. A risk created by a source you don’t trust will feel scarier.  The anti-vaccination movement is thick with mistrust of government and the drug industry.  Finally, risks that are human-made, like vaccines, evoke more worry than risks that are natural.  Some parents who refuse to have their kids vaccinated say they are willing to accept the risk of the disease, because the disease is “natural.”

Still, shouldn’t our wonderful powers of reason be able to overcome these instinctive impediments to clear thinking?  The neuroscience of fear makes clear that such hope is hubris.  Work on the neural roots of fear by the neuroscientist Joseph LeDoux of New York University, and others, has found that in the complex interplay of slower, conscious reason and quicker, subconscious emotion and instinct, the basic architecture of the brain ensures that we feel first and think second.  The part of the brain where the instinctive “fight or flight” signal is first triggered -- the amygdala -- is situated such that it receives incoming stimuli before the parts of the brain that think things over.  Then, in our ongoing response to potential peril, the way the brain is built and operates assures that we are likely to feel more and think less.  As Professor LeDoux puts it in *The Emotional Brain*:  “the wiring of the brain at this point in our evolutionary history is such that connections from the emotional systems to the cognitive systems are stronger than connections from the cognitive systems to the emotional systems.”

And so we have excessive fear of vaccines.  But just as we are too afraid of some things, this same “feelings and facts” system works the other way too, sometimes leaving us inadequately concerned about bigger risks.  A risky behavior you engage in voluntarily and that seems to afford plenty of benefit -- think sun-tanning for that “nice, healthy glow” -- feels less dangerous.  A societal risk, well off in the future, tends not to trigger the same instinctive alarm -- in part, because the hazard isn’t singling any one of us out, individually.  This helps explain why concern over climate change is broad, but thin.

Though it may be prone to occasional errors, our risk-perception system isn’t all bad.  After all, it has gotten us this far through evolution’s gantlet.  But a system that relies so heavily on emotion and instinct sometimes produces risk perceptions that don’t match the evidence, a “risk perception gap” that can be a risk in itself.  We do have to fear the dangers of fear itself.

In this remarkable era of discovery about how our brains operate, we have discovered a great deal about why the gap occurs, and we can -- and should -- put our detailed knowledge of risk perception to use in narrowing the risk-perception gap and reducing its dangers. As the Italian philosopher Nicola Abbagnano advised, “Reason itself is fallible, and this fallibility must find a place in our logic.”  Accepting that risk perception is not so much a process of pure reason, but rather a subjective combination of the facts and how those facts feel, might be just the step in the human learning curve we need to make.  Then, maybe, we’ll start making smarter decisions about vaccines and other health matters.

--David Ropeik is an instructor at the Harvard Extension School and the author of How Risky Is It, Really? Why Our Fears Don’t Always Match the Facts.


2.

Science

Science news

POLITICAL VIEWS 'HARD-WIRED' INTO YOUR BRAIN

By Richard Alleyne

Telegraph (London)
December 28, 2010

http://www.telegraph.co.uk/science/science-news/8228192/Political-views-hard-wired-into-your-brain.html

Scientists have found that people with conservative views have brains with larger amygdalas, almond shaped areas in the centre of the brain often associated with anxiety and emotions.

On the other hand, they have a smaller anterior cingulate, an area at the front of the brain associated with courage and looking on the bright side of life.

The "exciting" correlation was found by scientists at University College London who scanned the brains of two members of parliament and a number of students.

They found that the size of the two areas of the brain directly related to the political views of the volunteers.

However as they were all adults it was hard to say whether their brains had been born that way or had developed through experience.

Prof Geraint Rees, who led the research, said:  "We were very surprised to find that there was an area of the brain that we could predict political attitude.

"It is very surprising because it does suggest there is something about political attitude that is encoded in our brain structure through our experience or that there is something in our brain structure that determines or results in political attitude."

Prof. Rees and his team, who carried out the research for the Today programme on BBC Radio 4, looked at the brain make up of the Labour MP Stephen Pound and Alan Duncan, the Conservative Minister of State for International Development using a scanner.

They also questioned a further 90 students, who had already been scanned for other studies, about their political views.

The results, which will be published next year, back up a study that showed that some people were born with a "Liberal Gene" that makes people more likely to seek out less conventional political views.

The gene, a neurotransmitter in the brain called DRD4, could even be stimulated by the novelty value of radical opinions, claimed the researchers at the University of California.

3.

Science

A conversation

THE FEAR FACTOR MEETS ITS MATCH

By Claudia Dreifus

New York Times

December 3, 2002

http://www.nytimes.com/2002/12/03/science/a-conversation-with-david-ropeik-the-fear-factor-meets-its-match.html

CAMBRIDGE, Mass.-- In this world of new occupations, David Ropeik, a former television reporter, is the director of risk communication at the Harvard Center for Risk Analysis.  As a professional ''risk communicator'' for a research group, Mr. Ropeik writes essays, books, and opinion articles about reasons for people's fears, using the tools of statistics, psychology, and evolutionary biology.

With terrorist alerts, threats of war with Iraq and outbreaks of West Nile fever, Americans seem eager to hear someone who can explain why they are afraid and, perhaps more important, whether their fears have reasonable grounds.

Mr. Ropeik (pronounced roh-PEEK) writes essays on risk and reads them on ''Morning Edition,'' on National Public Radio.  A book by Mr. Ropeik and George Gray, *Risk: A Practical Guide for Deciding What's Really Safe and What's Really Dangerous in the World Around You*, was recently published.

Mr. Ropeik, 51, and his organization are also drawing attention because of their critics, who contend that the center, which is mostly financed by industry, is too closely tied to its sponsors, issuing studies about the products they make.

The criticism even emerged during a Congressional hearing over the confirmation of Dr. John D. Graham, the center's founder, who won approval to head a regulatory department in the White House Office of Management and Budget.

David Ropeik spoke about his roles and the controversies in interviews here and later by telephone.

Q.  Let's begin with basics.  Define risk analysis.

A.  We want to know how big or small a risk is, how expensive various solutions will be, to know if we do something about this risk in this way, what will that do to other risks?  Will it make them go up or down?

Risk analysis is meant to be thoughtful, rational, informed about complicated, often emotional issues, so that decisions we make are good, smart and informed.

I think there are many examples of where people are more or less fearful than the facts suggest they ought to be.  When people are over- or under-afraid, based on what the statistics suggest they ought to be of any given risk, they make bad choices.

Q.  Give us some examples of what you consider bad choices.

A.  Let's talk specifically about terrorism.  When people are afraid of flying, they drive.  I know of the mother of a United Airlines flight attendant who, in the wake of 9/11, was afraid of flying.  So a few weeks after 9/11, she drove to a family function several states away.  She was killed in a car crash.  She was too afraid of a low risk -- flying -- and her risk perception led her to a choice that was dangerous.

Another example?  People, when they read about high-risk situations, sometimes want to protect themselves by buying guns.  I don't say that's good or bad; that's their choice.  But it's been demonstrated that more guns bought for self-defense will go off in a crime, suicide, or accident than for self-defense.

Q.  Don't people sometimes have very good reasons for making risky choices?

A.  It's entirely rational for us to want to protect ourselves and to try to survive.  If you are walking through the woods and you see something on the ground, something that could be either a snake or a stick, you're not going to do a risk analysis of what is there.  You're going to jump out of the way.

We're biologically programmed to do this, to protect ourselves.  And when you don't have all the facts you will over- or underreact to a risk, based on your instincts.

At the Harvard Center for Risk Analysis, we analyze how big or small a risk might be, how one risk compares to another, and the effectiveness and costs of various risk management strategies, to identify how to maximize risk reduction with the most efficient use of limited resources.

Q.  Do you think that one of the legacies of 9/11 is that we've become an overly fearful society?

A.  All the scientific literature about risk says that we humans react to a new risk with more fear than after we've lived with it for a while.  This was new.  There are, however, many things the government could be doing better.

To stem emotional reactions, it's important that the government be open and communicating with the public.  During holidays lately, we get all these condition orange, condition yellow alerts. What do they mean?  They should tell us what we should be doing in response.  Should we be looking for packages on the street?  Should we be on alert for people scaling fences at reservoirs or chemical plants?

Q.  American airports are now centers of elaborate security rituals.  Do you think the aviation industry is trying to reshape our perception of flying risks, or are we really safer?

A.  I would say both.  You could say all the visible security is an overreaction because it's a lot of money being poured into what many people seem to think is some risk reduction, but probably is minimal.

As the Economist recently put it, it's a bunch of guys in uniforms looking butch, pretending to scare terrorists.  It seems that the flaws are still there and if somebody really wanted to sneak through, he could, like Richard Reid with his bomb-laden sneakers.

But then, if you are afraid of flying, you'll drive, and driving is by far, statistically, a much greater risk -- 41,000 Americans will be killed in motor vehicle crashes in the calendar year coming up, roughly.

If we're less afraid of flying because of the show of confidence that those butch guys in uniforms at the airports are offering, we can make more informed, reasonable decisions that ultimately may reduce our physical risk.

Q.  What is your take on how the government has communicated the potential risks of radioactive ''dirty bombs''?

A.  I think the government fanned our fears with how it described the risk of dirty bombs, rather than helping put the risk in perspective.

The truth is, if a dirty bomb does go off at some point, the terror will be higher than the actual physical damage, and the government should put that into perspective.  There is, after all, a real physical danger from the terror, as well as from the device itself.

Government officials called it a ''weapon of mass destruction,'' which according to all the scientists quoted at the time, it is not.  It is very bad for the neighborhood where it goes off, which is where most of the radiation stays.  It carries little greater physical risks than any conventional explosive.

Q.  On the biological front, we have the West Nile virus.  With birds dropping from the trees and people dying, should we worry?

A.  The spread of West Nile virus perfectly illustrates how risk perception can lead to more fear than the actual risk seems to warrant.  Compare the fear in areas where the virus is just showing up -- pretty high -- with fear of the same virus where it has existed for a few years.  The risk of getting it is the same everywhere, but it's more frightening to people for whom it's new.

Q.  The founder of your center, Dr. Graham, is now the administrator of the OMB's Office of Information and Regulatory Affairs.  During his confirmation hearings, Joan Claybrook, president of Public Citizen, said he was unfit for the job because of ''his history of conducting research that places anti-regulatory policy objectives before academic accuracy and integrity.''  How do you respond to this and other similar accusations?

A.  It was very painful for the faculty at the center, and I speak now on their behalf, to go through this because it questions the credibility of the entire center's science.

Q.  She wasn't the only one to level that sort of accusation.  Robert Kuttner, in the American Prospect, a liberal publication, wrote that Dr. Graham ''has taken loads of self-serving industry money to underwrite his Harvard Center.''  He suggests that the issues the center takes on are very much determined by your financial backers.  How did you react to his critique?

A.  I felt that this fellow had a view on things that came through.  He's a columnist.  He's an analyst.  However, I think the point he raises is right on.  Is that kind of rational cost-benefit thinking going to attract Greenpeace, or the Sierra Club, or National Audubon, or whatever?  Less likely than it's going to attract corporations who find comfort in that careful, non-emotional, non-value-based, but ''just the facts, please,'' sort of approach.

Q.  What do you personally fear?

A.  I'm afraid of being an overweight 51-year-old white guy and not eating well and not getting enough exercise.  I'm afraid of not making the right lifestyle choices.  But, you know, ice cream still tastes good.

I'm afraid of my 17-year-old son starting to drive.  He's in a pretty high-risk group there.  And he's my son.

Most of us are much more afraid of risks to our children than we are to ourselves.  That's why asbestos at our kid's school is much more frightening than it is at the workplace.

Q.  One senses that you live by the FDR adage ''We have nothing to fear but fear itself.''

A.  Yes.  We have to recognize that there are very real risks out there, but one of them is fear.

Photo: How safe are you from terrorists, natural disasters and auto crashes? David Ropeik is the designated answer man. (Rick Friedman for The New York Times)

4.

YOUR BRAIN ON PSEUDOSCIENCE: THE RISE OF POPULAR NEUROBOLLOCKS

By Steven Poole

*The “neuroscience” shelves in bookshops are groaning. But are the works of authors such as Malcolm Gladwell and Jonah Lehrer just self-help books dressed up in a lab coat?*

New Statesman

September 6, 2012

http://www.newstatesman.com/culture/books/2012/09/your-brain-pseudoscience

An intellectual pestilence is upon us.  Shop shelves groan with books purporting to explain, through snazzy brain-imaging studies, not only how thoughts and emotions function, but how politics and religion work, and what the correct answers are to age-old philosophical controversies.  The dazzling real achievements of brain research are routinely pressed into service for questions they were never designed to answer.  This is the plague of neuroscientism -- a.k.a. neurobabble, neurobollocks, or neurotrash -- and it’s everywhere.

In my book-strewn lodgings, one literally trips over volumes promising that “the deepest mysteries of what makes us who we are are gradually being unravelled” by neuroscience and cognitive psychology.  (Even practicing scientists sometimes make such grandiose claims for a general audience, perhaps urged on by their editors:  that quotation is from the psychologist Elaine Fox’s interesting book on “the new science of optimism,” Rainy Brain, Sunny Brain, published this summer.)  In general, the “neural” explanation has become a gold standard of non-fiction exegesis, adding its own brand of computer-assisted lab-coat bling to a whole new industry of intellectual quackery that affects to elucidate even complex sociocultural phenomena.  Chris Mooney’s The Republican Brain: the Science of Why They Deny Science -- and Reality disavows “reductionism” yet encourages readers to treat people with whom they disagree more as pathological specimens of brain biology than as rational interlocutors.

The New Atheist polemicist Sam Harris, in The Moral Landscape, interprets brain and other research as showing that there are objective moral truths, enthusiastically inferring -- almost as though this were the point all along -- that science proves “conservative Islam” is bad.

Happily, a new branch of the neuroscience-explains-everything genre may be created at any time by the simple expedient of adding the prefix “neuro” to whatever you are talking about.  Thus, “neuroeconomics” is the latest in a long line of rhetorical attempts to sell the dismal science as a hard one; “molecular gastronomy” has now been trumped in the scientized gluttony stakes by “neurogastronomy”; students of Republican and Democratic brains are doing “neuropolitics”; literature academics practise “neurocriticism”.  There is “neurotheology”, “neuromagic” (according to Sleights of Mind, an amusing book about how conjurors exploit perceptual bias) and even “neuromarketing.”  Hoping it’s not too late to jump on the bandwagon, I have decided to announce that I, too, am skilled in the newly minted fields of neuroprocrastination and neuroflâneurship.

Illumination is promised on a personal as well as a political level by the junk enlightenment of the popular brain industry.  How can I become more creative?  How can I make better decisions?  How can I be happier?  Or thinner?  Never fear:  brain research has the answers.  It is self-help armored in hard science.  Life advice is the hook for nearly all such books.  (Some cram the hard sell right into the title -- such as John B. Arden’s Rewire Your Brain: Think Your Way to a Better Life.)  Quite consistently, their recommendations boil down to a kind of neo-Stoicism, drizzled with brain-juice.  In a self-congratulatory egalitarian age, you can no longer tell people to improve themselves morally.  So self-improvement is couched in instrumental, scientifically approved terms.

The idea that a neurological explanation could exhaust the meaning of experience was already being mocked as “medical materialism” by the psychologist William James a century ago.  And today’s ubiquitous rhetorical confidence about how the brain works papers over a still-enormous scientific uncertainty.  Paul Fletcher, professor of health neuroscience at the University of Cambridge, says that he gets “exasperated” by much popular coverage of neuroimaging research, which assumes that “activity in a brain region is the answer to some profound question about psychological processes.  This is very hard to justify given how little we currently know about what different regions of the brain actually do.”  Too often, he tells me in an email correspondence, a popular writer will “opt for some sort of neuro-flapdoodle in which a highly simplistic and questionable point is accompanied by a suitably grand-sounding neural term and thus acquires a weightiness that it really doesn’t deserve.  In my view, this is no different to some mountebank selling quacksalve by talking about the physics of water molecules’ memories, or a beautician talking about action liposomes.”

SHADES OF GREY


The human brain, it is said, is the most complex object in the known universe.  That a part of it “lights up” on an fMRI scan does not mean the rest is inactive; nor is it obvious what any such lighting-up indicates; nor is it straightforward to infer general lessons about life from experiments conducted under highly artificial conditions.  Nor do we have the faintest clue about the biggest mystery of all -- how does a lump of wet grey matter produce the conscious experience you are having right now, reading this paragraph?  How come the brain gives rise to the mind?  No one knows.

So, instead, here is a recipe for writing a hit popular brain book.  You start each chapter with a pat anecdote about an individual’s professional or entrepreneurial success, or narrow escape from peril.  You then mine the neuroscientific research for an apparently relevant specific result and narrate the experiment, perhaps interviewing the scientist involved and describing his hair.  You then climax in a fit of premature extrapolation, inferring from the scientific result a calming bromide about what it is to function optimally as a modern human being.  Voilà, a laboratory-sanctioned Big Idea in digestible narrative form.  This is what the psychologist Christopher Chabris has named the “story-study-lesson” model, perhaps first perfected by one Malcolm Gladwell.  A series of these threesomes may be packaged into a book, and then resold again and again as a stand-up act on the wonderfully lucrative corporate lecture circuit.

Such is the rigid formula of Imagine: How Creativity Works, published in March this year by the American writer Jonah Lehrer.  The book is a shatteringly glib mishmash of magazine yarn, bizarrely incompetent literary criticism, inspiring business stories about mops and dolls, and zany overinterpretation of research findings in neuroscience and psychology.  Lehrer responded to my hostile review of the book by claiming that I thought the science he was writing about was “useless,” but such garbage needs to be denounced precisely in defense of the achievements of science.  (In a sense, as Paul Fletcher points out, such books are “anti-science, given that science is supposed to be our protection against believing whatever we find most convenient, comforting or compelling”.)  More recently, Lehrer admitted fabricating quotes by Bob Dylan in Imagine, which was hastily withdrawn from sale, and he resigned from his post at the New Yorker.  To invent things supposedly said by the most obsessively studied popular artist of our age is a surprising gambit.  Perhaps Lehrer misunderstood his own advice about creativity.

Mastering one’s own brain is also the key to survival in a dog-eat-dog corporate world, as promised by the cognitive scientist Art Markman’s Smart Thinking: How to Think Big, Innovate and Outperform Your Rivals.  Meanwhile, the field (or cult) of “neurolinguistic programming” (NLP) sells techniques not only of self-overcoming but of domination over others.  (According to a recent NLP handbook, you can “create virtually any and all states” in other people by using “embedded commands”.)  The employee using such arcane neurowisdom will get promoted over the heads of his colleagues; the executive will discover expert-sanctioned ways to render his underlings more docile and productive, harnessing “creativity” for profit.

Waterstones now even has a display section labelled "Smart Thinking," stocked with pop brain tracts.  The true function of such books, of course, is to free readers from the responsibility of thinking for themselves.  This is made eerily explicit in the psychologist Jonathan Haidt’s The Righteous Mind, published last March, which claims to show that “moral knowledge” is best obtained through “intuition” (arising from unconscious brain processing) rather than by explicit reasoning.  “Anyone who values truth should stop worshipping reason,” Haidt enthuses, in a perverse manifesto for autolobotomy.  I made an Olympian effort to take his advice seriously, and found myself rejecting the reasoning of his entire book.

Modern neuro-self-help pictures the brain as a kind of recalcitrant Windows PC.  You know there is obscure stuff going on under the hood, so you tinker delicately with what you can see to try to coax it into working the way you want.  In an earlier age, thinkers pictured the brain as a marvellously subtle clockwork mechanism, that being the cutting-edge high technology of the day.  Our own brain-as-computer metaphor has been around for decades:  there is the “hardware," made up of different physical parts (the brain), and the “software,” processing routines that use different neuronal “circuits.”  Updating things a bit for the kids, the evolutionary psychologist Robert Kurzban, in Why Everyone (Else) Is a Hypocrite, explains that the brain is like an iPhone running a bunch of different apps.

Such metaphors are apt to a degree, as long as you remember to get them the right way round. (Gladwell, in Blink -- whose motivational self-help slogan is that “we can control rapid cognition” -- burblingly describes the fusiform gyrus as “an incredibly sophisticated piece of brain software,” though the fusiform gyrus is a physical area of the brain, and so analogous to “hardware” not “software”.)  But these writers tend to reach for just one functional story about a brain subsystem -- the story that fits with their Big Idea -- while ignoring other roles the same system might play.  This can lead to a comical inconsistency across different books, and even within the oeuvre of a single author.

Is dopamine “the molecule of intuition,” as Jonah Lehrer risibly suggested in The Decisive Moment (2009), or is it the basis of “the neural highway that’s responsible for generating the pleasurable emotions,” as he wrote in Imagine?  (Meanwhile, Susan Cain’s Quiet: The Power of Introverts in a World That Can’t Stop Talking calls dopamine the “reward chemical” and postulates that extroverts are more responsive to it.)  Other recurring stars of the pop literature are the hormone oxytocin (the “love chemical”) and mirror neurons, which allegedly explain empathy.  Jonathan Haidt tells the weirdly unexplanatory micro-story that, in one experiment, “The subjects used their mirror neurons, empathized, and felt the other’s pain.”  If I tell you to use your mirror neurons, do you know what to do?  Alternatively, can you do as Lehrer advises and “listen to” your prefrontal cortex?  Self-help can be a tricky business.

CHERRY-PICKING
Distortion of what and how much we know is bound to occur, Paul Fletcher points out, if the literature is cherry-picked.

“Having outlined your theory,” he says, “you can then cite a finding from a neuroimaging study identifying, for example, activity in a brain region such as the insula . . . You then select from among the many theories of insula function, choosing the one that best fits with your overall hypothesis, but neglecting to mention that nobody really knows what the insula does or that there are many ideas about its possible function.”

But the great movie-monster of nearly all the pop brain literature is another region:  the amygdala.  It is routinely described as the “ancient” or “primitive” brain, scarily atavistic.  There is strong evidence for the amygdala’s role in fear, but then fear is one of the most heavily studied emotions; popularizers downplay or ignore the amygdala’s associations with the cuddlier emotions and memory.  The implicit picture is of our uneasy coexistence with a beast inside the head, which needs to be controlled if we are to be happy, or at least liberal.  (In The Republican Brain, Mooney suggests that “conservatives and authoritarians” might be the nasty way they are because they have a “more active amygdala”.)  René Descartes located the soul in the pineal gland; the moral of modern pop neuroscience is that original sin is physical -- a bestial, demonic proto-brain lurking at the heart of darkness within our own skulls.  It’s an angry ghost in the machine.

Indeed, despite their technical paraphernalia of neurotransmitters and anterior temporal gyruses, modern pop brain books are offering a spiritual topography.  Such is the seductive appeal of fMRI brain scans, their splashes of red, yellow, and green lighting up what looks like a black intracranial vacuum.  In mass culture, the fMRI scan (usually merged from several individuals) has become a secular icon, the converse of a Hubble Space Telescope image.  The latter shows us awe-inspiring vistas of distant nebulae, as though painstakingly airbrushed by a sci-fi book-jacket artist; the former peers the other way, into psychedelic inner space.  And the pictures, like religious icons, inspire uncritical devotion: a 2008 study, Fletcher notes, showed that “people -- even neuroscience undergrads -- are more likely to believe a brain scan than a bar graph.”

In The Invisible Gorilla, Christopher Chabris and his collaborator Daniel Simons advise readers to be wary of such “brain porn,” but popular magazines, science websites, and books are frenzied consumers and hypers of these scans.  “This is your brain on music,” announces a caption to a set of fMRI images, and we are invited to conclude that we now understand more about the experience of listening to music.  The “This is your brain on” meme, it seems, is indefinitely extensible:  Google results offer “This is your brain on poker," “This is your brain on metaphor,” “This is your brain on diet soda,” “This is your brain on God,” and so on, ad nauseam.  I hereby volunteer to submit to a functional magnetic-resonance imaging scan while reading a stack of pop neuroscience volumes, for an illuminating series of pictures entitled This Is Your Brain on Stupid Books About Your Brain.

None of the foregoing should be taken to imply that fMRI and other brain-investigation techniques are useless:  there is beautiful and amazing science in how they work and what well-designed experiments can teach us.  “One of my favorites,” Fletcher says, “is the observation that one can take measures of brain activity (either using fMRI or EEG) while someone is learning . . . a list of words, and that activity can actually predict whether particular words will be remembered when the person is tested later (even the next day).  This to me demonstrates something important -- that observing activity in the brain can tell us something about how somebody is processing stimuli in ways that the person themselves is unable to report.  With measures like that, we can begin to see how valuable it is to measure brain activity -- it is giving us information that would otherwise be hidden from us.”

In this light, one might humbly venture a preliminary diagnosis of the pop brain hacks’ chronic intellectual error.  It is that they misleadingly assume we always know how to interpret such “hidden” information, and that it is always more reliably meaningful than what lies in plain view.  The hucksters of neuroscientism are the conspiracy theorists of the human animal, the 9/11 Truthers of the life of the mind.

--Steven Poole is the author of the forthcoming book You Aren’t What You Eat, which will be published by Union Books in October.