Channel: Special Articles - Committee for Skeptical Inquiry

Losing Our Minds in the Age of Brain Science


Neuroscience and its new brain imaging tools are great achievements of modern science. But they are vulnerable to being oversold by the media, some overzealous scientists, and neuroentrepreneurs.

[Image: scientist looks at brain images]

You’ve seen the headlines: This is your brain on love. Or God. Or envy. Or happiness. And they’re reliably accompanied by articles boasting pictures of color-drenched brains—scans capturing Buddhist monks meditating, addicts craving cocaine, and college sophomores choosing Coke over Pepsi. The media—and even some neuroscientists, it seems—love to invoke the neural foundations of human behavior to explain everything from the Bernie Madoff financial fiasco to our slavish devotion to our iPhones, the sexual indiscretions of politicians, conservatives’ dismissal of global warming, and even an obsession with self-tanning.

Brains are big on campus, too. Take a map of any major university, and you can trace the march of neuroscience from research labs and medical centers into schools of law and business and departments of economics and philosophy. In recent years, neuroscience has merged with a host of other disciplines, spawning such new areas of study as neurolaw, neuroeconomics, neurophilosophy, neuromarketing, and neurofinance. Add to this the birth of neuroaesthetics, neurohistory, neuroliterature, neuromusicology, neuropolitics, and neurotheology. The brain has even wandered into such unlikely redoubts as English departments, where professors debate whether scanning subjects’ brains as they read passages from Jane Austen novels represents (a) a fertile inquiry into the power of literature or (b) a desperate attempt to inject novelty into a field that has exhausted its romance with psychoanalysis and postmodernism.

Clearly, brains are hot. Once the largely exclusive province of neuroscientists and neurologists, the brain has now entered the popular mainstream. As a newly minted cultural artifact, the brain is portrayed in paintings, sculptures, and tapestries and put on display in museums and galleries. One science pundit noted, “If Warhol were around today, he’d have a series of silkscreens dedicated to the cortex; the amygdala would hang alongside Marilyn Monroe.”

The prospect of solving the deepest riddle humanity has ever contemplated—itself—by studying the brain has captivated scholars and scientists for centuries. But never before has the brain so vigorously engaged the public imagination. The prime impetus behind this enthusiasm is a form of brain imaging called functional magnetic resonance imaging (fMRI), an instrument that came of age a mere two decades ago, which measures brain activity and converts it into the now-iconic vibrant images one sees in the science pages of the daily newspaper.

As a tool for exploring the biology of the mind, neuroimaging has given brain science a strong cultural presence. As one scientist remarked, brain images are now “replacing Bohr’s planetary atom as the symbol of science.” With its implied promise of decoding the brain, it is easy to see why brain imaging would beguile almost anyone interested in pulling back the curtain on the mental lives of others: politicians hoping to manipulate voter attitudes, marketers tapping the brain to learn what consumers really want to buy, agents of the law seeking an infallible lie detector, addiction researchers trying to gauge the pull of temptations, psychologists and psychiatrists seeking the causes of mental illness, and defense attorneys fighting to prove that their clients lack malign intent or even free will.

The problem is that brain imaging cannot do any of these things—at least not yet.

Author Tom Wolfe was characteristically prescient when he wrote of fMRI in 1996, just a few years after its introduction, “Anyone who cares to get up early and catch a truly blinding twenty-first century dawn will want to keep an eye on it.” Now we can’t look away.

Why the fixation? First, of course, there is the very subject of the scans: the brain itself. More complex than any structure in the known cosmos, the brain is a masterwork of nature endowed with cognitive powers that far outstrip the capacity of any silicon machine built to emulate it. Containing roughly eighty billion brain cells, or neurons, each of which communicates with thousands of other neurons, the three-pound universe cradled between our ears has more connections than there are stars in the Milky Way. How this enormous neural edifice gives rise to subjective feelings is one of the greatest mysteries of science and philosophy.

Now combine this mystique with the simple fact that pictures—in this case, brain scans—are powerful. Of all our senses, vision is the most developed. There are good evolutionary reasons for this arrangement: The major threats to our ancestors were apprehended visually; so were their sources of food. Plausibly, the survival advantage of vision gave rise to our reflexive bias for believing that the world is as we perceive it to be, an error that psychologists and philosophers call “naive realism.” This misplaced faith in the trustworthiness of our perceptions is the wellspring of two of history’s most famously misguided theories: that the world is flat and that the sun revolves around the Earth. For thousands of years, people trusted their raw impressions of the heavens. Yet, as Galileo understood all too well, our eyes can deceive us. He wrote in his Dialogues of 1632 that the Copernican model of the heliocentric universe commits a “rape upon the senses”—it violates everything our eyes tell us.

Brain scan images are not what they seem either—or at least not how the media often depict them. They are not photographs of the brain in action in real time. Scientists can’t just look “in” the brain and see what it does. Those beautiful color-dappled images are actually representations of particular areas in the brain that are working the hardest—as measured by increased oxygen consumption—when a subject performs a task such as reading a passage or reacting to stimuli, such as pictures of faces. The powerful computer located within the scanning machine transforms changes in oxygen levels into the familiar candy-colored splotches indicating the brain regions that become especially active during the subject’s performance. Scientists can draw well-informed inferences from these patterns, but the greatest challenge of imaging is that it is very difficult to look at a fiery spot on a brain scan and conclude with certainty what is going on in the mind of the person.

Neuroimaging is a young science, barely out of its infancy, really. In such a fledgling enterprise, the half-life of facts can be especially brief. To regard research findings as settled wisdom is folly, especially when they emanate from a technology whose implications are still poorly understood. As any good scientist knows, there will always be questions to hone, theories to refine, and techniques to perfect. Nonetheless, scientific humility can readily give way to exuberance. When it does, the media often seem to have a ringside seat at the spectacle.

Several years ago, as the 2008 presidential election season was gearing up, a team of neuroscientists from UCLA sought to solve the riddle of the undecided, or swing, voter. They scanned the brains of swing voters as they reacted to photos and video footage of the candidates. The researchers translated the resultant brain activity into the voters’ unspoken attitudes and, together with three political consultants from a Washington, D.C.–based firm called FKF Applied Research, presented their findings in the New York Times in an op-ed titled “This Is Your Brain on Politics.” There, readers could view scans dotted with tangerine and neon-yellow hot spots indicating regions that “lit up” when the subjects were exposed to images of Hillary Clinton, Mitt Romney, John Edwards, and other candidates. Revealed in these activity patterns, the authors claimed, were “some voter impressions on which this election may well turn.” Among those impressions was that two candidates had utterly failed to “engage” with swing voters. Who were these unpopular politicians? John McCain and Barack Obama, the two eventual nominees for president.

Another much-circulated study, “The Neural Correlates of Hate,” published in 2008, came from neuroscientists at University College London. The researchers asked subjects to bring in photos of people they hated—generally ex-lovers, work rivals, or reviled politicians—as well as people about whom subjects felt neutrally. By comparing their responses—that is, patterns of brain activation elicited by the hated face—with their reaction to the neutral photos, the team claimed to identify the neurological correlates of intense hatred. Not surprisingly, much of the media coverage attracted by the study flew under the headline: “‘Hate Circuit’ Found in Brain.”

One of the researchers, Semir Zeki, told the press that brain scans could one day be used in court—for example, to assess whether a murder suspect felt a strong hatred toward the victim. Not so fast. True, these data do reveal that certain parts of the brain become more active when people look at images of people they hate and presumably feel contempt for. The problem is that the illuminated areas on the scan are activated by many other emotions, not just hate. There is no newly discovered collection of brain regions that are wired together in such a way that they constitute the identifiable neural counterpart of hatred.

University press offices, too, are notorious for touting sensational details in their media-friendly releases: Here’s a spot that lights up when subjects think of God (“Religion Center Found!”), or researchers find a region for love (“Love Found in the Brain!”). Neuroscientists sometimes refer disparagingly to these studies as “blobology,” their tongue-in-cheek label for studies that show which brain areas become activated as subjects experience X or perform task Y. To repeat: It’s all too easy for the nonexpert to lose sight of the fact that fMRI and other brain-imaging techniques do not literally read thoughts or feelings. By obtaining measures of brain oxygen levels, they show which regions of the brain are more active when a person is thinking, feeling, or, say, reading or calculating. But it is a rather daring leap to go from these patterns to drawing confident inferences about how people feel about political candidates or paying taxes, or what they experience in the throes of love.

Pop neuroscience makes an easy target, we know. Yet we invoke it because these studies garner a disproportionate amount of media coverage and shape public perception of what brain imaging can tell us. Skilled science journalists cringe when they read accounts claiming that scans can capture the mind itself in action. Serious science writers take pains to describe quality neuroscience research accurately. Indeed, an eddy of discontent is already forming. “Neuromania,” “neurohubris,” and “neurohype”—“neurobollocks,” if you’re a Brit—are just some of the labels that have been brandished, sometimes by frustrated neuroscientists themselves. But in a world where university press releases elbow one another for media attention, it’s often the study with a buzzy storyline (“Men See Bikini-Clad Women as Objects, Psychologists Say”) that gets picked up and dumbed down.

The problem with such mindless neuroscience is not neuroscience itself. The field is one of the great intellectual achievements of modern science. Its instruments are remarkable. The goal of brain imaging, which is merely one of its tools, is enormously important and fascinating: to bridge the explanatory gap between the intangible mind and the corporeal brain. But that relationship is extremely complex and incompletely understood. Therefore, it is vulnerable to being oversold by the media, some overzealous scientists, and neuroentrepreneurs who tout facile conclusions that reach far beyond what the current evidence warrants—fits of “premature extrapolation,” as British neuroskeptic Steven Poole calls them. When it comes to brain scans, seeing may be believing, but it isn’t necessarily understanding.

Some of the misapplications of neuroscience are amusing and essentially harmless. Take, for instance, the new trend of neuromanagement books such as Your Brain and Business: The Neuroscience of Great Leaders, which advises nervous CEOs “to be aware that anxiety centers in the brain connect to thinking centers, including the PFC [prefrontal cortex] and ACC [anterior cingulate cortex].” The fad has, perhaps not surprisingly, infiltrated the parenting and education markets, too. Parents and teachers are easy marks for “brain gyms,” “brain-compatible education,” and “brain-based parenting,” not to mention dozens of other unsubstantiated techniques. For the most part, these slick enterprises merely dress up or repackage good advice with neuroscientific findings that add nothing to the overall program. As one cognitive psychologist quipped, “Unable to persuade others about your viewpoint? Take a Neuro-Prefix—influence grows or your money back.”

But reading too much into brain scans matters when real-world concerns hang in the balance. Consider the law. When a person commits a crime, who is at fault? The perpetrator or his or her brain? Of course, this is a false choice. If biology has taught us anything, it is that “my brain” versus “me” is a false distinction. Still, if biological roots can be identified—and better yet, captured on a brain scan as juicy blotches of color—it is too easy for nonprofessionals to assume that the behavior under scrutiny must be “biological” and therefore “hardwired,” involuntary, or uncontrollable. Criminal lawyers, not surprisingly, are increasingly drawing on brain images supposedly showing a biological defect that “made” their clients commit murder. Looking to the future, some neuroscientists envision a dramatic transformation of criminal law. David Eagleman, for one, welcomes a time when “we may someday find that many types of bad behavior have a basic biological explanation [and] eventually think about bad decision making in the same way we think about any physical process, such as diabetes or lung disease.” As this comes to pass, he predicts, “more juries will place defendants on the not-blameworthy side of the line.”

But is this the correct conclusion to draw from neuroscientific data? After all, if every behavior is eventually traced to detectable correlates of brain activity, does this mean we can one day write off all troublesome behavior on a don’t-blame-me-blame-my-brain theory of crime? Will no one ever be judged responsible? Thinking through these profoundly important questions turns on how we understand the relationship between the brain and the mind.

The mind cannot exist without the brain. Virtually all modern scientists, ourselves included, are “mind-body monists”: they believe that mind and brain are composed of the same material “stuff.” All subjective experience, from a frisson of fear to the sweetness of nostalgia, corresponds to physical events in the brain. Decapitation proves this point handily: no functioning brain, no mind. But even though the mind is produced by the action of neurons and brain circuits, the mind is not identical with the matter that produces it. There is nothing mystical or spooky about this statement, nor does it imply an endorsement of mind-body “dualism,” the dubious assertion that mind and brain are composed of different substances. Instead, it means simply that one cannot use the physical rules from the cellular level to completely predict activity at the psychological level. By way of analogy, if you wanted to understand the text on this page, you could analyze the words by submitting their contents to an inorganic chemist, who could ascertain the precise molecular composition of the ink. Yet no amount of chemical analysis could help you understand what these words mean, let alone what they mean in the context of the other words on the page.

Scientists have made great strides in reducing the organizational complexity of the brain from the intact organ to its constituent neurons, the proteins they contain, genes, and so on. Using this template, we can see how human thought and action unfold at a number of explanatory levels, working upward from the most basic elements. At one of the lower tiers in this hierarchy is the neurobiological level, which comprises the brain and its constituent cells. Genes direct neuronal development; neurons assemble into brain circuits. Information processing, or computation, and neural network dynamics hover above. At the middle level are conscious mental states, such as thoughts, feelings, perceptions, knowledge, and intentions. Social and cultural contexts, which play a powerful role in shaping our thoughts, feelings, and behavior, occupy the highest landings of the hierarchy. Problems arise, however, when we ascribe too much importance to the brain-based explanations and not enough to psychological or social ones. Just as one obtains differing perspectives on the layout of a sprawling city while ascending in a skyscraper’s glass elevator, we can gather different insights into human behavior at different levels of analysis.

The key to this approach is recognizing that some levels of explanation are more informative for certain purposes than others. This principle is profoundly important in therapeutic intervention. A scientist trying to develop a medication for Alzheimer’s disease will toil on the lower levels of the explanatory ladder, perhaps developing compounds aimed at preventing the formation of the amyloid plaques and neurofibrillary tangles endemic to the disease. A marriage counselor helping a distraught couple, though, must work on the psychological level. Efforts by this counselor to understand the couple’s problems by subjecting their brains to fMRIs could be worse than useless because doing so would draw attention away from their thoughts, feelings, and actions toward each other—the level at which intervention would be most helpful.

This discussion brings us back to brain scans and other representations of brain-derived data. What can we infer from this information about what people are thinking and feeling or how their social world is influencing them? In a way, imaging rekindles the age-old debate over whether brain equals mind. Can we ever fully comprehend the psychological by referring to the neural? This “hard problem,” as philosophers call it, is one of the most daunting puzzles in all of scientific inquiry. What would the solution even look like? Will the parallel languages of neurobiology and mental life ever converge on a common vernacular?

Many believe it will. According to neuroscientist Sam Harris, inquiry into the brain will eventually and exhaustively explain the mind and, hence, human nature. Ultimately, he says, neuroscience will—and should—dictate human values. Semir Zeki, the British neuroscientist, and legal scholar Oliver Goodenough hail a “‘millennial’ future, perhaps only decades away, [when] a good knowledge of the brain’s system of justice and of how the brain reacts to conflicts may provide critical tools in resolving international political and economic conflicts.” No less towering a figure than neuroscientist Michael Gazzaniga hopes for a “brain-based philosophy of life” based on an ethics that is “…built into our brains. A lot of suffering, war, and conflict could be eliminated if we could agree to live by them more consciously.”

It’s no wonder, then, that some see neuroscientists as the “new high priests of the secrets of the psyche and explainers of human behavior in general.” Will we one day replace government bureaucrats with neurocrats? Though short on details—neuroscientists don’t say how brain science is supposed to determine human values or achieve world peace—their predictions are long on ambition. In fact, some experts talk of neuroscience as if it is the new genetics, that is, just the latest overarching narrative commandeered to explain and predict virtually all of human behavior. And before genetic determinism there was the radical behaviorism of B.F. Skinner, who sought to explain human behavior in terms of rewards and punishments. Earlier still, in the late nineteenth and early twentieth centuries, Freudianism posited that people were the products of unconscious conflicts and drives. Each of these movements suggested that the causes of our actions are not what we think they are. Is neurodeterminism poised to become the next grand narrative of human behavior?

As a psychiatrist and a psychologist, we have followed the rise of popular neuroscience with mixed feelings. We’re delighted to see laypeople so interested in brain science, and we are excited by the promise of new neurophysiological discoveries. Yet we’re dismayed that much of the media diet consists of “vulgarized neuroscience,” as the science watchdog Neuroskeptic puts it, that offers facile and overly mechanistic explanations for complicated behaviors. We were both in training when modern neuroimaging techniques made their debut. The earliest major functional imaging technique (PET, or positron emission tomography) appeared in the mid-1980s. Less than a decade later, the near wizardry of fMRI was unveiled and soon became a prominent instrument of research in psychology and psychiatry. Indeed, expertise in imaging technology is becoming a sine qua non for graduate students in many psychology programs, increasing their odds of obtaining federal research grants and teaching posts and boosting the acceptance rates of their papers by top-flight journals. Many psychology departments now make expertise in brain imaging a requirement for their new hires.

The brain is said to be the final scientific frontier, and rightly so, in our view. Yet in many quarters brain-based explanations appear to be granted a kind of inherent superiority over all other ways of accounting for human behavior. We call this assumption “neurocentrism”—the view that human experience and behavior can be best explained from the predominant or even exclusive perspective of the brain. From this popular vantage point, the study of the brain is somehow more “scientific” than the study of human motives, thoughts, feelings, and actions. By making the hidden visible, brain imaging has been a spectacular boon to neurocentrism.

Consider addiction. “Understanding the biological basis of pleasure leads us to fundamentally rethink the moral and legal aspects of addiction,” writes neuroscientist David Linden. This is popular logic among addiction experts, but to us, it makes little sense. Granted, there may be good reasons to reform the way the criminal justice system deals with addicts, but the biology of addiction is not one of them. Why? Because the fact that addiction is associated with neurobiological changes is not, in itself, proof that the addict is unable to choose. Just look at American actor Robert Downey Jr. He was once a poster boy for drug excess. “It’s like I have a loaded gun in my mouth and my finger’s on the trigger, and I like the taste of gunmetal,” he said. It seemed only a matter of time before he would meet a horrible end. But Downey entered rehab and decided to change his life. Why did Downey use drugs? Why did he decide to stop and to remain clean and sober? An examination of his brain, no matter how sophisticated the probe, could not tell us why and perhaps never will. The key problem with neurocentrism is that it devalues the importance of psychological explanations and environmental factors, such as familial chaos, stress, and widespread access to drugs, in sustaining addiction.

Brain imaging and other neuroscience techniques hold enormous potential for elucidating the neural correlates of everyday decisions, addiction, and mental illness. Yet these promising new technologies must not detract from the importance of levels of analysis other than the brain in explaining human behavior. Ours is an age in which brain research is flourishing—a time of truly great expectations. Yet it is also a time of mindless neuroscience that leads us to overestimate how much neuroscience can improve legal, clinical, and marketing practices, let alone inform social policy. Naive media, slick neuroentrepreneurs, and even an occasional overzealous neuroscientist exaggerate the capacity of scans to reveal the contents of our minds, exalt brain physiology as inherently the most valuable level of explanation for understanding behavior, and rush to apply underdeveloped, if dazzling, science for commercial and forensic use.

Granted, it is only natural that advances in knowledge about the brain make us think more mechanistically about ourselves. But if we become too carried away with this view, we may impede one of the most challenging cultural projects looming in the years ahead: how to reconcile advances in brain science with personal, legal, and civic notions of freedom.

The neurobiological domain is one of brains and physical causes. The psychological domain, the domain of the mind, is one of people and their motives. Both are essential to a full understanding of why we act as we do and to the alleviation of human suffering. The brain and the mind are different frameworks for explaining experience. And the distinction between them is hardly an academic matter; it bears crucial implications for how we think about human nature, personal responsibility, and moral action.


This article is adapted from the authors’ new book, Brainwashed: The Seductive Appeal of Mindless Neuroscience (Basic Books, 2013). Extensive notes for this article (five pages) can be found in the book.

