What are we getting wrong in neuroscience?

In 1935, an ambitious neurology professor named Egas Moniz sat in the audience at a symposium on the frontal lobes, enthralled by neuroscientist Carlyle F. Jacobsen's description of some experiments Jacobsen had conducted with fellow investigator John Fulton. Jacobsen and Fulton had damaged the frontal lobes of a chimpanzee named "Becky," and afterwards they had observed a considerable behavioral transformation. Becky had previously been stubborn, erratic, and difficult to train, but post-operation she became placid, imperturbable, and compliant. 

Moniz had already been thinking about the potential therapeutic value of frontal lobe surgery in humans after reading some papers about frontal lobe tumors and how they affected personality. He believed that some mental disorders were caused by static abnormalities in frontal lobe circuitry. By removing a portion of the frontal lobes, he hypothesized he would also be removing neurons and pathways that were problematic, in the process alleviating the patient's symptoms. Although Moniz had been pondering this possibility, Jacobsen's description of the changes seen in Becky was the impetus Moniz needed to try a similar approach with humans. He did so just three months after seeing Jacobsen's presentation, and the surgical procedure that would come to be known as the frontal lobotomy was born.

Moniz's procedure initially involved drilling two holes in a patient's skull, then injecting pure alcohol subcortically into the frontal lobes, with the hopes of destroying the regions where the mental disorder resided. Moniz soon turned to another tool for ablation, however---a steel loop he called a leucotome (which is Greek for "white matter knife")---and began calling the procedure a prefrontal leucotomy. Although his means of assessing the effectiveness of the procedure were inadequate by today's standards---for example, he generally only monitored patients for a few days after the surgery---Moniz reported recovery or improvement in most of the patients who underwent the procedure. Soon, prefrontal leucotomies were being done in a number of countries throughout the world. 

The operation attracted the interest of neurologist Walter Freeman and neurosurgeon James Watts. They modified the procedure again, this time to involve entering the skull from the side using a large spatula. Once inside the cranium, the spatula was wiggled up and down in the hopes of severing connections between the thalamus and prefrontal cortex (based on the hypothesis that these connections were crucial for emotional responses, and could precipitate a disorder when not functioning properly). They also renamed the procedure "frontal lobotomy," as leucotomy implied only white matter was being removed and that was not the case with their method. 

Several years later (in 1946), Freeman made one final modification to the procedure. He advocated for using the eye socket as an entry point to the frontal lobes (again to sever the connections between the thalamus and frontal areas). As his tool to do the ablation, he chose an ice pick. The ice pick was inserted through the eye socket, wiggled around to do the cutting, and then removed. The procedure could be done in 10 minutes; the development of this new "transorbital lobotomy" brought about the real heyday of lobotomy.

The introduction of transorbital lobotomy led to a significant increase in the popularity of the operation---perhaps due to the ease and expediency of the procedure. Between 1949 and 1952, somewhere around 5,000 lobotomies were conducted each year in the United States (the total number of lobotomies done by the 1970s is thought to have been between 40,000 and 50,000). Watts strongly protested the transformation of lobotomy into a procedure that could be done in one quick office visit---and done by a psychiatrist instead of a surgeon, no less---a disagreement that led him and Freeman to sever their partnership.

Freeman, however, was not discouraged; he became an ardent promoter of transorbital lobotomy. He traveled across the United States, stopping at mental asylums to perform the operation on any patients who seemed eligible and to train the staff to perform the surgery after he had moved on. Freeman himself is thought to have performed or supervised around 3,500 lobotomies; his patients included a number of minors, among them a 4-year-old child (who died 3 weeks after the procedure).

Eventually, however, the popularity of transorbital lobotomy began to fade. One would like to think that this happened because people recognized how barbaric the procedure was (along with the fact that the approach was based on somewhat flimsy scientific rationale). The real reasons for abandoning the operation, however, were more pragmatic. The downfall of lobotomy began with some questions about the effectiveness of the surgery, especially in treating certain conditions like schizophrenia. It was also recognized that faculties like motivation, spontaneity, and abstract thought suffered irreparable damage from the procedure. And the final nail in the coffin of lobotomy was the development of psychiatric drugs like chlorpromazine, which for the first time gave clinicians a pharmacological option for intractable cases of mental disorders.

It is easy for us now to look at the practice of lobotomy as nothing short of brutality, and to scoff at what seems like a tenuous scientific explanation for why the procedure should work. It's important, however, to look at such issues in the history of science in the context of their time. In an age when effective psychiatric drugs were nonexistent, psychosurgical interventions were viewed as the "wave of the future." They offered a hopeful possibility for treating disorders that were often incurable and potentially debilitating. And while the approach of lobotomy seems far too non-selective (meaning such serious brain damage was not likely to affect just one mental faculty) to us now, the idea that decreasing frontal lobe activity might reduce mental agitation was actually based on the available scientific literature at the time.

Still, it's clear that the decision to attempt to treat psychiatric disorders through inflicting significant brain damage represented a failure of logic at multiple levels. When we discuss neuroscience today, we often assume that our days of such egregious mistakes are over. And while we have certainly progressed since the time of lobotomies (especially in the safeguards protecting patients from such untested and dangerous treatments), we are not that far removed temporally from this sordid time in the history of neuroscience. Today, there is still more unknown about the brain than there is known, and thus it is to be expected that we continue to make significant mistakes in how we think about brain function, experimental methods in neuroscience, and more.

Some of these mistakes may be due simply to a natural human approach to understanding difficult problems. For example, when we encounter a complex problem we often first attempt to simplify it by devising some straightforward way of describing it. Once a basic appreciation is reached, we add to this elementary knowledge to develop a more thorough understanding---and one that is more likely to be a better approximation of the truth. However, that overly simplistic conceptualization of the subject can give birth to countless erroneous hypotheses when used in an attempt to explain something as intricate as the brain. And in science, these types of errors can lead a field astray for years before it finds its way back on course.

Other mistakes involve research methodology. Due to the rapid technological advances in neuroscience that have occurred in the past half-century, we have some truly amazing neuroscience research tools available to us that would have only been science fiction 100 years ago. Excitement about these tools, however, has caused researchers in some cases to begin utilizing them extensively before we are fully prepared to do so. This has resulted in using methods that cannot yet answer the questions we presume they can, and has provided us with results that we are sometimes unable to accurately interpret. In accepting the answers we obtain as legitimate and assuming our interpretations of results are valid, we may commit errors that can confound hypothesis development for some time.

Advances in neuroscience in the 20th and into the 21st century have been nothing short of mind-boggling, and our successes in understanding far outpace our long-standing failures. However, any scientific field is rife with mistakes, and neuroscience is no different. In this article, I will discuss just a few examples of how missteps and misconceptions continue to affect progress in the field of neuroscience.

The ________________ neurotransmitter

Nowadays the fact that neurons use signaling molecules like neurotransmitters to communicate with one another is one of those pieces of scientific knowledge that is widely known even to non-scientists. Thus, it may be a bit surprising that this understanding is less than 100 years old. It was in 1921 that the German scientist Otto Loewi first demonstrated that, when stimulated, the vagus nerve releases a chemical substance that can affect the activity of nearby cells. Several years later, that substance was isolated by Henry Dale and determined to be acetylcholine (at that point a substance that had already been identified, just not as a neurotransmitter). It wasn't until the middle of the 20th century, however, that it became widely accepted that neurotransmitters were used throughout the brain. The discovery of other neurotransmitters and neuropeptides would be scattered throughout the second half of the 20th century.

Of course, each time a new neurotransmitter (or other signaling molecule like a neuropeptide) is discovered, one of the first questions scientists want to answer is "what is its role in the brain?" The approach to answering this question generally involves some degree of simplification, as researchers seem to search for one overriding function that can be used to describe the neurotransmitter. As a result, the first really intriguing function discovered for a neurotransmitter often becomes a way of defining it.

Gradually, the discovered functions for the neurotransmitter become so diverse that it is no longer rational to attach one primary role to it, and researchers are forced to revise their initial explanations of the neurotransmitter's function by incorporating new discoveries. Sometimes it is later found that the original function linked to the neurotransmitter does not even match up well with the tasks the chemical is actually responsible for in the brain. The idea that the neurotransmitter has one main function, however, can be difficult to dislodge. This becomes a problem because that inaccurate conceptualization may lead to years of research seeking evidence to support a particular role for the neurotransmitter, while that hypothesized role may be misunderstood---or outright erroneous.

The neuropeptide oxytocin provides a good example of this phenomenon. The history of oxytocin begins with the same Henry Dale mentioned above. In 1906, Dale found that extracts of ox pituitary glands could speed up uterine contractions when administered to a variety of mammals including cats, dogs, rabbits, and rats. This discovery soon led to the exploration of similar extracts as an aid to human childbirth; they were found to be especially helpful in facilitating labor that was progressing slowly. These effects on childbirth earned oxytocin its name, which is derived from the Greek words for "quick birth."

Clinical use of oxytocin didn't become widespread until researchers were able to synthesize oxytocin in the laboratory. But after that occurred in the 1950s, oxytocin became the most commonly used agent to induce labor throughout the world (sold under the trade names Pitocin and Syntocinon). However, despite the fact that oxytocin plays such an important role in a significant percentage of pregnancies today, the vast majority of research and related news on oxytocin in the past few decades has involved very different functions for the hormone: love, trust, and social bonding.

This line of research can be traced back to the 1970s when investigators learned that oxytocin reached targets throughout the brain, suggesting it might play a role in behavior. Soon after, researchers found that oxytocin injections could prompt virgin female rats to exhibit maternal behaviors like nest building. Researchers then began exploring oxytocin's possible involvement in a variety of social interactions ranging from sexual behavior to aggression. In the early 1990s, some discoveries of oxytocin's potential contribution to forming social bonds emerged from an uncommon species to use as a research subject: the prairie vole.

Prairie voles. Image courtesy of thenerdpatrol.

The prairie vole is a small North American rodent that looks kind of like a cross between a gopher and a mouse. Prairie voles are somewhat unremarkable animals except for one unusual feature of their social lives: they form what seem to be primarily monogamous long-term relationships with voles of the opposite sex. This is uncommon among mammals; by most estimates, only about 3 to 5% of mammalian species display evidence of monogamy.

A monogamous rodent species creates an interesting opportunity to study monogamy in the laboratory. Researchers learned that female prairie voles begin to display a preference for a male---a preference that can lead to a long-term attachment---after spending just 24 hours in the same cage as the male. It was also observed that administration of oxytocin made it more likely females would develop this type of preference for a male vole, and treatment with an oxytocin antagonist made it less likely. Thus, oxytocin became recognized as playing a crucial part in the formation of heterosexual social bonds in the prairie vole---a discovery that would help to launch a torrent of research into oxytocin's involvement in social bonding and other prosocial behaviors.

When researchers then turned from rodents like prairie voles to attempt to understand the role oxytocin might play in humans, research findings that suggested oxytocin acted to promote positive emotions and behavior in people began to accumulate. Administration of oxytocin, for example, was found to increase trust. People with higher levels of oxytocin were observed to display greater empathy. Oxytocin administration was discovered to make people more generous and to promote faithfulness in long-term relationships. One study even found that petting a dog was associated with increased oxytocin levels---in both the human and the dog. Due to the large number of study results indicating a positive effect of oxytocin on socialization, the hormone earned a collection of new monikers including the love hormone, the trust hormone, and even the cuddle hormone.

Excited by all of these newfound social roles for oxytocin, researchers eagerly---and perhaps impetuously---began to explore the role of oxytocin deficits in psychiatric disorders along with the possibility of correcting those deficits with oxytocin administration. One disorder that has gained a disproportionate amount of attention in this regard is autism spectrum disorder, or autism. Oxytocin deficits seemed to be a logical explanation for autism since social impairment is a defining characteristic of the disorder, and oxytocin appeared to promote healthy social behavior. As researchers began to delve into the relationship between oxytocin levels in the blood and autism, however, they did not find a clear, direct link. Undeterred, investigators explored the effects of intranasal administration of oxytocin---which involves spraying the neuropeptide into the nasal passages---on symptoms in autism patients. And initially, there were indications intranasal oxytocin might be effective at improving autism symptoms (more on this below).

Soon, however, some began to question whether all of the excitement surrounding the "trust hormone" had caused researchers to make hasty decisions regarding experimental design, for all of the studies of intranasal oxytocin administration relied on a method that wasn't---and still has not been---fully validated. Researchers turned to intranasal administration because oxytocin that enters the bloodstream does not appear to cross the blood-brain barrier in appreciable amounts; there are indications, however, that the neuropeptide does make it into the brain via the intranasal route. The problem is that even by intranasal delivery very little oxytocin reaches the brain---according to one estimate, only 0.005% of the administered dose. Even when very high doses are used, the amount that reaches the brain via intranasal delivery does not seem comparable to the amount of oxytocin that must be administered directly into the brain (intracerebroventricularly) of an animal to influence behavior.
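To get a feel for what an estimate like 0.005% implies, here is a quick back-of-the-envelope calculation. The 24 IU dose and the micrograms-per-IU conversion below are rough, hypothetical ballpark figures chosen purely for illustration, not values taken from the studies discussed:

```python
# Illustrative only: the dose and conversion factor are rough,
# hypothetical ballpark figures, not values from the studies discussed.
dose_iu = 24                     # an assumed intranasal dose, in IU
ug_per_iu = 1.67                 # assumed micrograms per IU
dose_ug = dose_iu * ug_per_iu    # ~40 micrograms administered

brain_fraction = 0.00005         # the cited 0.005% estimate, as a fraction
brain_ng = dose_ug * brain_fraction * 1000  # micrograms -> nanograms

print(f"~{brain_ng:.1f} nanograms estimated to reach the brain")
```

On these assumptions, only about two nanograms of a roughly 40-microgram dose would ever reach the brain---a scale that makes the unfavorable comparison to direct intracerebroventricular doses in animal work easy to appreciate.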

But many studies have indicated an effect, so what is really going on here? One possibility is that the effects are not due to the influence of oxytocin on the central nervous system, but due to oxytocin entering the bloodstream and interacting with the large number of oxytocin receptors found in the periphery; if true, this would mean that exogenous oxytocin is not having the effects on the brain researchers have hypothesized. Another, more concerning possibility is that many of the studies published on the effects of intranasal oxytocin suffer from methodological problems like questionable statistical approaches to analyzing data.

Indeed, criticisms of the statistical methods of some of the seminal papers in this field have been made publicly. A recent review also found that studies of intranasal oxytocin often involve small sample sizes; significant findings of small studies are more likely to be statistical aberrations and not representative of true effects. It is also probable that the whole area of research is influenced by publication bias, which is the tendency to publish reports of studies that observe significant results while neglecting to publish reports of studies that fail to see any notable effects. This may seem like a necessary evil, as journal readers are more likely to be interested in learning about new discoveries than experiments that yielded none. Ignoring non-significant findings, though, can lead to the exaggeration of the importance of an observed effect because the available literature may seem to indicate no conflicting evidence (even though such evidence might exist hidden away in the file drawers of researchers throughout the world).
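The interplay of small samples and selective publication described above can be illustrated with a toy simulation. The sketch below (using made-up numbers, not data from any oxytocin study) runs many small two-group studies of a treatment with zero true effect, "publishes" only the statistically significant ones, and shows that the published record ends up full of seemingly large effects:

```python
import random
import statistics

random.seed(0)  # for reproducibility

def simulate_study(n, true_effect=0.0):
    """Simulate one two-group study; return the estimated effect and
    whether it crosses a conventional significance threshold."""
    control = [random.gauss(0, 1) for _ in range(n)]
    treated = [random.gauss(true_effect, 1) for _ in range(n)]
    diff = statistics.mean(treated) - statistics.mean(control)
    se = (2 / n) ** 0.5                  # standard error (known sd = 1)
    return diff, abs(diff / se) > 1.96   # crude z-test at p < .05

# Run 2,000 small studies (n = 15 per group) of a treatment that truly
# does nothing, and keep only the "significant" results -- mimicking a
# literature shaped by publication bias.
published = [d for d, sig in (simulate_study(15) for _ in range(2000)) if sig]

print(f"{len(published)} of 2000 null studies reached significance")
print(f"mean |effect| among them: {statistics.mean(abs(d) for d in published):.2f}")
```

Roughly 5% of these null studies clear the significance bar by chance alone, and because only large chance fluctuations can clear it when n = 15, every one of the "published" effects looks substantial even though the true effect is zero.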

These potential issues are underscored by the inconsistent research results and failed attempts at replicating, or repeating, studies that have reported significant effects of intranasal oxytocin. For example, one of the most influential studies on intranasal oxytocin, which found that oxytocin increases trust, has failed to replicate several times. And in many cases, null research findings have emerged after initial reports indicated a significant effect. The findings of the early autism studies mentioned above, for example, have been contradicted by multiple randomized controlled trials conducted in the last few years, which reported a lack of a significant therapeutic effect.

Not surprisingly, over the years the simple understanding of oxytocin as a neuropeptide that promotes positive emotions and behavior has also become more complicated as it was learned that the effects of oxytocin might not always be so rosy. In one study, for example, researchers observed intranasal oxytocin to be associated with increased envy and gloating. Another study found oxytocin increased ethnocentrism, or the tendency to view one's own ethnicity or culture as superior to others. And in a recent study, intranasal administration of oxytocin increased aggressive behavior. To add even more complexity to the picture, the effects of oxytocin may not be the same in men and women, and may even differ across individuals and environmental contexts.

In an attempt to explain these discordant findings, researchers have proposed new interpretations of oxytocin's role in social behavior. One hypothesis, for example, suggests that oxytocin is involved in promoting responsiveness to any important social cue---whether it be positive (e.g. smiling) or negative (e.g. aggression); this is sometimes called the "social salience" hypothesis. Despite such recent efforts to reconcile the seemingly contradictory findings in oxytocin research, however, there is still not a consensus as to the effects of oxytocin, and the hypothesis that oxytocin is involved in positive social behavior continues to guide the majority of the research in this area.

Thus, for years now oxytocin research has centered on a role for the neuropeptide that is at best sensationalized and at worst deeply flawed. And oxytocin is only the most recent example of this phenomenon. In the 1990s, dopamine earned a reputation as the "pleasure neurotransmitter." Soon after, serotonin became known as the "mood neurotransmitter." These appellations were based on the most compelling discoveries linked to these neurotransmitters: dopamine is involved in processing rewarding stimuli, and serotonin is targeted by treatments for depression. 

However, now that we know more about these substances, it is clear these short definitions of functionality are much too simplistic. Not only are dopamine and serotonin involved in much more than reward and mood, respectively, but also the roles of these two neurotransmitters in reward and mood seem to be very complicated and poorly understood. For example, most researchers no longer think dopamine signaling causes pleasure, but that it is involved with other intricacies of memorable experiences like the identification of important stimuli in the environment---whether they be positive (i.e. rewarding) or negative. Likewise, that serotonin levels alone don't determine mood is now common knowledge in scientific circles (and is finding its way into public perception as well). Thus, these short, easy-to-remember titles are misleading---and somewhat useless.

In assigning one function to one neurotransmitter or neuropeptide, we overlook important facts, like the understanding that these neurochemicals often act at multiple receptor subtypes, sometimes with drastically different effects. And we neglect to consider that different areas of the brain have different levels of receptors for each neurochemical, and may be preferentially populated with one receptor subtype over another---leading to different patterns of activity in brain regions with different functional specializations. Add to that all of the downstream effects of receptor activation (which can vary significantly depending on the receptor subtype, the brain area in which it is found, etc.), and you have an extremely convoluted picture. Trying to sum it up in one function is ludicrous.

Not only do these simplifying approaches hinder a more complete understanding of the brain, they also waste countless hours of research and dollars of research funding in pursuit of confirming ideas that will likely have to be replaced eventually with something more elaborate. Still, this type of simplification in science does seem to serve a purpose. Our brains gravitate towards these straightforward ways of explaining things---possibly because without some comprehensible framework to start from, understanding something as complex as the brain seems like a Herculean task. However, if we are going to utilize this approach, we should at least do it with more awareness of our tendency to do so. By recognizing that, when it comes to the brain, the story we are telling is almost always going to be much more complicated than we are inclined to believe, perhaps we can avoid committing the mistakes of oversimplification we have made in the past.

Psychotherapeutic drugs and the deficits they correct

Before the 1950s, the treatment of psychiatric disorders looked very different from how it does today. As discussed above, unrefined neurosurgery like a transorbital lobotomy was considered a viable approach to treating a variety of ailments ranging from panic disorder to schizophrenia. But lobotomy was only one of a number of potentially dangerous interventions used at the time that generally did little to improve the mental health of most patients. Pharmacological treatments were not much more refined, and often involved the use of agents that simply acted as strong sedatives to make a patient's behavior more manageable. 

The landscape began to change dramatically in the 1950s, however, when a new wave of pharmaceutical drugs became part of psychiatric treatment. The first antipsychotics for treating schizophrenia, the first antidepressants, and the first benzodiazepines to treat anxiety and insomnia were all discovered in this decade. Some refer to the 1950s as the "golden decade" of psychopharmacology, and the decades that followed as the "psychopharmacological revolution," since over this time the discovery and development of psychiatric drugs would progress exponentially; soon pharmacological treatments would be the preferred method of treating psychiatric illnesses.

The success of new psychiatric drugs over the second half of the twentieth century came as something of a surprise because the disorders these drugs were being used to treat were still poorly understood. Thus, drugs were often discovered to be effective through a trial and error process, i.e. test as many substances as we can and eventually maybe we'll find one that treats this condition. Because of how little was understood about the biological causes of these disorders, if a drug with a known mechanism was found to be effective in treating a disorder with an unknown mechanism, often it led to a hypothesis that the disorder must be due to a disruption in the system affected by the drug.

Antidepressants serve as a prime example of this phenomenon. Before the 1950s, a biological understanding of depression was essentially nonexistent. The dominant perspective of the day on depression was psychoanalytic---depression was caused by internal conflicts among warring facets of one's personality, and the conflicts were generally considered to be created by the internalization of troublesome or traumatic experiences that one had gone through earlier in life. The only non-psychoanalytic approaches to treatment involved poorly understood and generally unsuccessful procedures like electroconvulsive therapy---which was actually effective in certain cases, but potentially dangerous in others---and treatments like barbiturates or amphetamines, which didn't seem to target anything specific to depression but instead caused widespread sedation or stimulation, respectively.

The first antidepressants were discovered serendipitously. The story of iproniazid, one of the first drugs marketed specifically for depression---the other being imipramine, which was discovered and first used clinically at around the same time---is a good example of the serendipity involved. In the early 1950s, researchers were working with a chemical called hydrazine and investigating its derivatives for anti-tuberculosis properties (tuberculosis was a scourge at the time and it was routine to test any chemical for its potential to treat the disease). Interestingly, hydrazine derivatives may never have even been tested if the Germans hadn't used hydrazine as rocket fuel during World War II, causing large surpluses of the substance to be found at the end of the war and then sold to pharmaceutical companies on the cheap.

In 1952, a hydrazine derivative called iproniazid was tested on tuberculosis patients in Sea View Hospital on Staten Island, New York. Although the drug did not seem to be superior to other anti-tuberculosis agents in treating tuberculosis, a strange side effect was noted in these preliminary trials: patients who took iproniazid displayed increased energy and significant improvements in mood. One researcher reported that patients were "dancing in the halls 'tho there were holes in their lungs." Although at first largely overlooked as "side effects" of iproniazid treatment, eventually researchers became interested in the mood-enhancing effect of the drug in and of itself; before the end of the decade, the drug was being used to treat patients with depression. 

Around the same time as the discovery of the first antidepressant drugs, a new technique called spectrophotofluorimetry was being developed. This technique allowed researchers to detect changes in the levels of neurotransmitters called monoamines (e.g. dopamine, serotonin, norepinephrine) after the administration of drugs (like iproniazid) to animals. Spectrophotofluorimetry allowed researchers to determine that iproniazid and imipramine were having an effect on monoamines. Specifically, the administration of these antidepressants was linked to an increase in serotonin and norepinephrine levels.

This discovery led to the first biological hypothesis of depression, which suggested that depression is caused by deficiencies in levels of serotonin and/or norepinephrine. At first, this hypothesis focused primarily on norepinephrine and was known as the "noradrenergic hypothesis of depression." Later, however---due in part to the putative effectiveness of antidepressant drugs developed to more specifically target the serotonin system---the emphasis would be placed more on serotonin's role in depression, and the "serotonin hypothesis of depression" would become the most widely accepted view of depression.

The serotonin hypothesis would go on to be endorsed not only by the scientific community, but also---thanks in large part to the frequent referral to a serotonergic mechanism in pharmaceutical ads for antidepressants---by the public at large. It would guide drug development and research for years. As the serotonin hypothesis was reaching its heyday, however, researchers were also discovering that it didn't seem to tell the whole story of the etiology of depression.

A number of problems with the serotonin hypothesis were emerging. One was that antidepressant drugs took weeks to produce a therapeutic benefit, but their effects on serotonin levels seemed to occur within hours after administration. This suggested that, at the very least, some mechanism other than increasing serotonin levels was involved in the therapeutic effects of the drugs. Other research that questioned the hypothesis began to accumulate as well. For example, experimentally depleting serotonin in humans was not found to cause depressive symptoms.

There is now a long list of experimental findings that question the serotonin and noradrenergic hypotheses (indeed, the area of research is muddied even further by evidence suggesting antidepressant drugs may not even be all that effective). Clearly, changes in monoamine levels are an effect of most antidepressants, but it does not seem that there is a direct relationship between serotonin or norepinephrine levels and depression. At a minimum, there must be another component to the mechanism.

For example, some have proposed that increases in serotonin levels are associated with the promotion of neurogenesis (the birth of new neurons) in the hippocampus, which is an important brain region for the regulation of the stress response. But recently researchers have also begun to deviate significantly from the serotonin hypothesis, suggesting bases for depression that are different altogether. One more recent hypothesis, for instance, focuses on a role for the glutamate system in the occurrence of depression. 

The serotonin hypothesis of depression is just one of many hypotheses of the biological causes of psychiatric disorders that were formulated based on the assumption that the primary mechanism of a drug that treats a disorder must be correcting the primary dysfunction that causes the disorder. The same logic was used to devise the dopamine hypothesis of schizophrenia and the low arousal hypothesis of attention-deficit/hyperactivity disorder (ADHD). Both of these hypotheses were at one point the most commonly touted explanations for schizophrenia and ADHD, respectively, but both are now generally considered too simplistic (at least in their original formulations).

The logic used to construct such hypotheses is somewhat tautological: drug A increases B and treats disorder C, thus disorder C is caused by a deficiency in B. It neglects to recognize that B may be just one factor influencing some downstream target, D, and thus the effects of the drug may be achieved in various ways, of which B is just one. It fails to appreciate the sheer complexity of the nervous system and the multitude of factors likely involved in the onset of psychiatric illness. These factors include not just neurotransmitters, but also hormones, genes, gene expression, aspects of the environment, and an extensive list of other possible influences. This complexity likely means there are an almost inconceivable number of ways for a disorder like depression to develop, and our understanding of the main pathways involved is likely still rudimentary.

Thus, when we reduce such a complex issue to resting primarily upon the levels of one neurotransmitter, we are making a similar type of mistake as discussed in the first section of this article, but perhaps with even greater repercussions. For the errors that result from simplifying psychiatric disorders into "one neurotransmitter" maladies affect not just progress in neuroscience, but also the mental and physical health of patients suffering from these disorders. Many of these patients are prescribed psychiatric drugs on the assumption that their disorder is simple enough to be fixed by adjusting some "chemical imbalance"; perhaps it is not surprising, then, that psychiatric drugs are ineffective in a large number of patients. And many patients continue taking such drugs---sometimes with minimal benefit---despite experiencing significant side effects. Thus, there is an even greater imperative in this area to move away from searching for simple answers based on known mechanisms and venture out into more intimidating and unknown waters.

Our faith in functional neuroimaging

As approaches to creating images of brain activity like positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) were developed in the second half of the twentieth century, they understandably sparked a great deal of excitement among neuroscientists. These methods allowed neuroscience to achieve something once thought to be impossible---the ability to see what was happening in the brain in (close to) real time. By monitoring cerebral blood flow using a technique like fMRI, one can tell which brain areas are receiving the most blood---and by extension which areas are most active neuronally---when someone is performing some action (e.g. completing a memory task, thinking about a loved one, viewing pictures of rewarding or aversive stimuli, etc.).

This method of neuroimaging, which finally allowed researchers to draw conclusions about elusive connections between structure and function, was dubbed functional neuroimaging. Functional neuroimaging methods have predictably become some of the most popular research tools in neuroscience over the last few decades. fMRI surpassed PET as a preferred tool for functional neuroimaging soon after it was developed (due to a variety of factors including better spatial resolution and a less invasive approach), and it has been the investigative method of choice in over 40,000 published studies since the 1990s.

The potential of functional neuroimaging---and fMRI in particular---to unlock countless secrets of the brain intrigued not only investigators but also the popular press. The media quickly realized that the results of fMRI studies could be simplified, combined with some colorful pictures of brain scans, and sold to the public as representative of huge leaps in understanding which parts of the brain are responsible for certain behaviors or patterns of behaviors. The simplification of these studies led to incredible claims that complex behaviors and emotions like religious experience or jealousy emanated primarily from one area of the brain.

Fortunately, this wave of sensationalism has died down a bit as many neuroscientists have been vocal about how this type of oversimplification propagates untruths about the brain and misrepresents the capabilities of functional neuroimaging. The argument against oversimplifying fMRI results, though, is often an argument against oversimplification itself; the assumption is that the methodology is not flawed, but the interpretation is. More and more researchers, however, are asserting not only that the reported results of neuroimaging experiments are ripe for misinterpretation, but that they are often simply inaccurate.

One major problem with functional neuroimaging involves how the data from these experiments are handled. In fMRI, for example, the device creates a representation of the brain by dividing an image of the brain into thousands of small 3-D cubes called voxels. Each voxel can represent the activity of over a million neurons. Researchers must then analyze the data to determine which voxels are indicative of higher levels of blood flow, and these results are used to determine which areas of the brain are most active. Most of the brain is active at all times, however, so researchers must compare activity in each voxel to activity in that voxel during another task to determine if blood flow in a particular voxel is higher during the task they are interested in.

Due to the sheer volume of data, an issue arises with the task of deciding whether blood flow observed in a particular voxel is representative of activity above a baseline. Each fMRI image can consist of anywhere from 40,000 to 500,000 voxels, depending on the settings of the machine, and each experiment involves many images (sometimes thousands), each taken a couple of seconds apart. This creates a statistical complication called the multiple comparisons problem: if you perform a large number of tests, you are more likely to find at least one significant result simply by chance than if you had performed just one test.

For example, if you flip a coin ten times, it would be highly unlikely that you would get tails nine times. But if you flipped 50,000 coins ten times each, you would be much more likely to see that result for at least one of the coins. That coin-flip result is what, in experimental terms, we would call a false positive. If you're using a typical coin, getting nine tails out of ten flips doesn't tell you anything about the inherent qualities of the coin---it's just a statistical aberration that occurred by chance. The same type of thing is more likely to occur when a researcher makes the sometimes millions of comparisons (between active and baseline voxels) involved in an fMRI study. By chance alone, it's likely some of them will appear to indicate a significant level of activity.
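The coin-flip intuition can be checked with a quick simulation. This is only an illustrative sketch: the numbers (ten flips per coin, 50,000 coins, a threshold of nine tails) come from the example above, and everything else is an assumption made for demonstration purposes.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def nine_or_more_tails(n_flips=10):
    """Flip a fair coin n_flips times; report whether 9+ of them are tails."""
    tails = sum(random.random() < 0.5 for _ in range(n_flips))
    return tails >= 9

# For a single fair coin, 9+ tails in 10 flips has probability
# (10 + 1) / 2**10, or about 1.1% -- unlikely, as the text says.
p_single = 11 / 2**10

# But repeat the experiment 50,000 times and hundreds of these
# "false positives" are expected by chance alone.
n_coins = 50_000
false_positives = sum(nine_or_more_tails() for _ in range(n_coins))

print(f"single-coin probability: {p_single:.4f}")
print(f"coins showing 9+ tails: {false_positives} of {n_coins}")
```

Roughly 500 of the 50,000 fair coins should show nine or more tails; by analogy, an uncorrected voxel-by-voxel analysis will flag some voxels as "active" purely by chance.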

fMRI image of a dead Atlantic salmon. Taken from Bennett et al. (2009).

This problem was exemplified by an experiment conducted by a group of researchers in 2009 that involved an fMRI scan of a dead Atlantic salmon (yes, the fish). The scientists put the salmon in an fMRI scanner and showed the fish a collection of pictures depicting people engaged in different social situations. They went so far as to ask the salmon---again, a dead fish---what emotion the people in the photographs were experiencing. When the researchers analyzed their data without correcting for the multiple comparisons problem, they observed the miraculous: the dead fish appeared to display brain activity that indicated it was "thinking" about the emotions portrayed in the photographs. Of course, this wasn't what was really going on; rather, false positives arising from the multiple comparisons problem made it appear as if there was real activity occurring in the fish's brain when obviously there was not.

The salmon experiment shows how serious a concern the multiple comparisons problem can be when it comes to analyzing fMRI data. By now, however, the problem is well known, and most researchers correct for it in some way when statistically analyzing their neuroimaging data. Even today, not all do---a 2012 review of 241 fMRI studies found that the authors of 41% of them did not report making any adjustments to account for the multiple comparisons problem. And even when conscious attempts to avoid the problem are made, there is still a question of how effective they are at producing reliable results.
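The simplest such correction, shown here purely as an illustration (the review above doesn't say which methods the surveyed studies used), is the Bonferroni procedure: divide the significance threshold by the number of tests performed. The voxel count below is a hypothetical number chosen for the example.

```python
# Bonferroni correction: to keep the family-wise error rate near alpha
# across m independent tests, require p < alpha / m for each test.
alpha = 0.05
m = 100_000  # hypothetical number of voxel-wise comparisons

per_test_threshold = alpha / m  # 5e-07: far stricter than 0.05

# Chance of at least one false positive across all m independent tests:
fwer_uncorrected = 1 - (1 - alpha) ** m             # essentially 1.0
fwer_corrected = 1 - (1 - per_test_threshold) ** m  # stays below alpha

print(f"per-test threshold: {per_test_threshold:.1e}")
print(f"uncorrected family-wise error rate: {fwer_uncorrected:.4f}")
print(f"corrected family-wise error rate:   {fwer_corrected:.4f}")
```

The trade-off is statistical power: at a per-voxel threshold this strict, genuine but modest activations are easily missed, which is part of the motivation for alternative approaches like the clustering method discussed next.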

For example, one method for dealing with the multiple comparisons problem that has become popular among fMRI researchers is called clustering. In this approach, only when clusters of contiguous voxels are active together is there enough cause to consider a region of the brain more active than baseline. Part of the rationale here is that if a result is legitimate, it is more likely to involve aggregates of active voxels, and so by focusing on clusters instead of individual voxels one can reduce the likelihood of false positives. 

The problem with clustering is that it doesn't always seem to work that well. For example, a study published this year analyzed fMRI data from close to 500 subjects using three of the most popular fMRI software packages and found that one common approach to clustering still led to a false positive rate of up to 70%. So, even when researchers take pains to account for the multiple comparisons problem, the results often don't seem to inspire confidence that the effect observed is real and not just a result of random fluctuations in brain activity. 

This isn't to say that no fMRI data should be trusted, or that fMRI shouldn't be used to explore brain activity. Rather, it suggests that much more care needs to be taken to ensure fMRI data is managed properly to avoid making erroneous conclusions. Unfortunately, however, the difficulties with fMRI don't begin and end with the multiple comparisons problem. Many fMRI studies also suffer from small sample sizes. This makes it more difficult to detect a true effect, and when some effect is observed it also means it is more likely to be a false positive. Additionally, it means that when a true effect is observed, the size of the effect is more likely to be exaggerated. Some researchers have also argued that neuroimaging research suffers from publication bias, which further inflates the importance of any significant findings because conflicting evidence may not be publicly available.

All in all, this suggests the need for more caution when it comes to conducting and interpreting the results of fMRI experiments. fMRI is an amazing technology that offers great promise in helping us to better understand the nervous system. However, functional neuroimaging is a relatively young field, and we are still learning how to properly use techniques like fMRI. It's to be expected, then---as with any new technology or recently developed field---that there will be a learning curve as we develop an appreciation for the best practices in how to obtain data and interpret results. Thus, while we continue to learn these things, we should use considerable restraint and a critical eye when assessing the results of functional neuroimaging experiments.


Progress in neuroscience over the past several centuries has changed our understanding of what it means to be human. Over that time, we learned that the human condition is inextricably connected to this delicate mass of tissue suspended in cerebrospinal fluid in our cranium. We discovered that most afflictions that affect our behavior originate in that tissue, and then we started to figure out ways to manipulate brain activity---through the administration of various substances both natural and man-made---to treat those afflictions. We developed the ability to observe activity in the brain as it occurs, making advances in understanding brain function that humans were once thought to be incapable of. And there are many research tools in neuroscience that are still being refined, but which hold the promise of even greater breakthroughs over the next 50 years.

The mistakes made along the way are to be expected. As a discipline grows, the accumulation of definitive knowledge does not follow a straight trajectory. Rather, it involves an accurate insight followed by some fumbling around in the dark before another truthful deduction is made. Neuroscience is no different. Although we have a tendency to think highly of our current state of knowledge in the field, chances are that at any point in time it is still riddled with errors. The goal is not to achieve perfection in that sense, but simply to remain cognizant of the impossibility of doing so. By recognizing that we never know as much as we think we know, and by frequently assessing which approaches to understanding are leading us astray, we are more likely to arrive at an approximation of the truth.


References (in addition to linked text above):

Finger, S. Origins of Neuroscience. New York, NY: Oxford University Press; 1994.

Lopez-Munoz, F., & Alamo, C. (2009). Monoaminergic neurotransmission: the history of the discovery of antidepressants from 1950s until today. Current Pharmaceutical Design, 15(14), 1563-1586. DOI: 10.2174/138161209788168001

Valenstein, ES. Great and Desperate Cures: The Rise and Decline of Psychosurgery and other Radical Treatments for Mental Illness. New York, NY: Basic Books, Inc.; 1986.


The powerful influence of placebos on the brain

The term placebo effect describes an improvement in the condition of a patient after being given a placebo--an inert substance (e.g. a sugar pill) that the patient expects may hold some benefit for them. The placebo effect has long been recognized as an unavoidable aspect of medical treatment. Physicians before the 1950s often took advantage of this knowledge by giving patients treatments like bread pills or injections of water, with the understanding that patients had a tendency to feel better when they were given something--even if it was inactive--than when they were given nothing at all.

In the years following World War II, it became recognized that the placebo effect is more than just a medical curiosity--it is an extremely potent influence on patient psychology and physiology. With this realization came the determination that a condition in which participants are given a placebo is a necessary component of any experiment designed to assess the effectiveness of a drug; for, if just the act of receiving treatment makes patients feel better, then that improvement must be subtracted from the overall strength of a drug's action to determine the true efficacy of the substance. This awareness made placebo conditions commonplace in clinical trials of pharmaceutical drugs, and led to the modern conception of the placebo effect as an important component of drug effects.

While many of us are aware of the use of placebos to test the effectiveness of drugs, we may be less likely to realize that some fraction of the benefit of any drug we take is likely due to the placebo effect. Because we expect the medications we take to help us feel better, they generally do to some extent; this influence is added to the efficacy of the mechanistic action of the drug to produce its overall effect. The magnitude of the contribution of the placebo effect can range from minor to the majority of the drug effect, depending on the medicine in question. Thus, the placebo effect in medicine is something that influences many of us on a daily basis, and all of us at some point or another.

The potency of the placebo effect

The magnitude of the placebo effect is often under-appreciated. Although placebos have no active ingredients, they have been shown to influence both psychology and physiology, and in some cases the effects of a placebo have been found to be stronger than the effects of the medication being compared against it. Placebos can improve quality of life, mitigate the burden of a disability, and--amazingly--have even been associated with decreased mortality. For example, in studies of patients with congestive heart failure, those who adhered to taking placebos regularly were 50% less likely to die than those who were in the placebo group but didn't adhere to taking their "medication." Those who faithfully took placebos were also less likely to experience cardiovascular events like stroke or heart attack.

The effects of placebos on a number of physiological systems have now been well documented. Placebos have been found to influence the activity of the autonomic nervous system, such as heart rate, gastrointestinal activity, and respiration. Placebos can elicit changes in hormone levels across various functional systems; effects have ranged from reducing stress hormone levels based on expectation alone to decreasing levels of appetite-stimulating hormones by convincing participants they had just eaten a very calorie-rich food (even though they hadn't). Researchers have even found that placebos can affect the activity of the immune system. In one study, investigators elicited immunosuppression as a placebo-induced response, and in another study it was found that watching advertisements for the antihistamine drug Claritin led to the drug being more effective than it was in participants who didn't receive any pro-drug messages.

Despite all of the experiments that have documented placebo effects, there is still a great deal to be learned about which neurobiological systems are necessary for creating the placebo effect. It may be that the neurobiology of the placebo effect is different depending on the type of stimulus or the expectancies involved. In other words, it is not clear if there is a group of brain regions and/or pathways that are activated whenever the placebo effect occurs--regardless of the circumstances--or if there are different regions activated depending on the context of the placebo administration. Recent research has used neuroimaging to attempt to unravel the mechanism underlying the placebo effect and, while the effect is complex and still poorly understood, these studies have provided some insight into which areas of the nervous system may be important to mediating it.

Neuroimaging of the placebo effect

Much of the experimental evidence regarding the placebo effect comes from studies of the impact of placebos on pain. This is due in part to the early recognition that the experience of pain is amenable to manipulation by the use of placebos. Pain is also useful to study because it is a ubiquitous problem that has relevance for clinical practice; additionally, we have a fairly good understanding of the components of the nervous system that are involved in pain sensations.

There are several brain regions that receive direct innervation from pathways that carry nociceptive (i.e. pain-related) information from the body to the brain. These include: the thalamus, through which pain signals must pass as they travel to the cortex; the somatosensory cortex, which is the cortical area where sensory signals from the body are initially processed; the insula, which is thought to be involved in mediating the intensity and emotional response to pain; and the anterior cingulate cortex, which is also believed to be involved in emotional responses to pain. Treatment with a placebo has been found to decrease activity in all of the above areas, and several studies have found that larger placebo responses were associated with a greater reduction of activity in these regions.

In addition to affecting these pain "centers" in the brain, placebos have also been found to activate pathways that travel down from the brainstem to the spinal cord to inhibit pain responses. The best known of these pathways runs from an area in the midbrain called the periaqueductal grey, down to the spinal cord. Activation of the periaqueductal grey can be initiated by a variety of cortical areas, and leads to increases in levels of natural painkillers known as endogenous opioids, which act to suppress pain. Endogenous opioids are part of an adaptive mechanism that allows us to tolerate pain, presumably to ensure we can extricate ourselves from an acutely dangerous situation before we become preoccupied with pain sensations.

Placebos can activate these descending pain modulatory pathways involving the periaqueductal grey to cause increases in levels of endogenous opioids. Some studies have found that increased activity in the periaqueductal grey is associated with the degree of placebo analgesia experienced. Additionally, administering a drug called naloxone that blocks the receptors where endogenous opioids normally exert their effect causes a decrease in placebo-induced analgesia. Thus, it seems that activity in the periaqueductal grey is an important component of placebo-induced pain relief. 

Placebos also affect activity in higher brain regions like the prefrontal cortex, amygdala, and striatum. Changes in activity in these areas may cause alterations in levels of endogenous opioids and/or may involve changes in affective and anticipatory states, which may influence the perception of pain. Connections between the prefrontal cortex and periaqueductal grey seem to be important for placebo analgesia, as placebos can cause increased activity in areas of the prefrontal cortex; this activity is associated with increased periaqueductal grey stimulation and endogenous opioid release. Placebo treatments also elevate levels of endogenous opioids in the amygdala, and reduce activity there. The role most commonly attributed to the amygdala involves the detection of threats in the environment and the generation of anxiety about those threats, and thus reduced activity in the amygdala may mitigate the anxiety-producing impact of pain. Placebo treatments also cause increases in both dopaminergic and endogenous opioid activity in the striatum. Dopamine activity in the striatum is generally associated with learning, motivation, and emotion; it has been hypothesized that the striatum may be involved in encoding information about the rewarding nature of pain relief and the aversive aspects of pain itself, and thus in the learning and behavior associated with pain avoidance.

Although the placebo effect has been explored most comprehensively in regards to pain, it is not confined to mitigating painful sensations; placebos have been found to affect experiences ranging from emotion to movement in Parkinson's disease. In many cases, the same systems discussed above in the context of pain are thought to be involved. For example, being given a placebo anti-anxiety medication led to decreased activity in the amygdala in response to a series of negative images; participants also rated the images as less unpleasant after taking the placebo. Studies in Parkinson's disease patients have found that taking a placebo that is expected to facilitate movement can cause increased dopamine levels in the striatum, which is associated with improvements in mobility.

Thus, it seems that the brain areas mentioned above may not be specific to the type of placebo effect explored, and may be part of some underlying neural circuitry that mediates the placebo effect in general. However, it is also likely true that we are just scratching the surface with the identification of these common areas. The full neural circuitry of the placebo effect is probably more complex than the collection of regions outlined above, and presumably includes a more intricate neurochemical basis than just endogenous opioids and dopamine. For example, recent research has identified roles for hormones like cholecystokinin and oxytocin in the placebo response as well.

Additionally, it is unclear if brain regions like the prefrontal cortex, which are activated in different types of placebo responses, are activated to serve the same purpose in each context. For example, in a pain context the prefrontal cortex may be involved in activating the periaqueductal grey; in a situation where someone is given a placebo anti-anxiety drug, however, the prefrontal cortex may be involved with regulation of areas like the amygdala. Thus, it is uncertain if this shared neural circuitry is actually working in the same manner in different placebo situations.

Research into the phenomenon of the placebo effect will therefore continue, and for more than just the sake of curiosity. If we can learn more about the placebo effect and how it is mediated by the brain, we can use that knowledge to better predict which patients might be likely to experience a large placebo effect, and which would not. An ability to predict placebo response in patients could be immensely valuable, and could turn the placebo effect from a quirky aspect of medical care into something that can be directly manipulated to improve the effectiveness of treatment. And, while we may not return to the days of giving bread pills without consent, we may be able to better evaluate the efficacy of medications if we can better understand the contribution the placebo effect is making.

Wager, T., & Atlas, L. (2015). The neuroscience of placebo effects: connecting context, learning and health. Nature Reviews Neuroscience, 16(7), 403-418. DOI: 10.1038/nrn3976

Associating brain structure with function and the bias of more = better

It seems that, of all of the behavioral neuroscience findings that make their way into popular press coverage, those that involve structural changes to the brain are most likely to pique the interest of the public. Perhaps this is because we have a tendency to think of brain function as something that is flexible and constantly changing, and thus alterations in function do not seem as dramatic as alterations in structure, which give the impression of being more permanent.

After all, until relatively recently it was believed that we are born with a fixed number of neurons--and that was it. From the end of neural development through the rest of one's life, it was thought, no new neurons were produced; thus, the inevitable death of neurons set off a slow but inexorable decline into cognitive obscurity that we were helpless to prevent. We now know that this is not true, however, and that new neurons are produced throughout one's lifespan in certain areas of the brain.

Regardless, perhaps that outdated thinking on the immutability of the brain causes people to be especially impressed by the mention of some activity changing the structure of the brain, because there is no shortage of articles in the popular press covering studies that involve changes to brain structure. Just since the beginning of 2015, there have been major news stories about methamphetamine, smoking, childhood neglect, and head trauma in professional fighters all leading to reductions in grey matter, as well as a more positive story that music training in children can lead to increased grey matter in some areas of the cortex. And it seems like stories about meditation's ability to increase grey matter are constantly being recycled among blogs and popular news sites. In all of these stories it is either implied or stated without much supporting evidence that adding more grey matter to a part of the brain increases cognitive function, while losing grey matter decreases it. For example, in a story about smoking and its effects on the cortex, the reduction in grey matter was described in this way: "As the brain deteriorates, the neurons that once resided in each dying layer inevitably get subtracted from the overall total, impairing the organ's function."

Of course, in neurodegenerative diseases like Alzheimer's disease, we know that loss of brain tissue corresponds to increasingly more severe deficits. But what do we know for sure about non-pathological changes in the structure/size of the brain, and how they are associated with function? Is it widely accepted that more grey matter is equivalent to improved function and less to diminished function? Contrary to what you might conclude if you read some recent news articles--and in many cases, the studies they summarize--on these subjects, we have not found a consistent formula for predicting how changes in structure will affect function. And so, it is not a universal rule that more is better when it comes to grey matter.

More brain mass = better brain function?

The hypothesis that larger brain size is associated with increased mental ability can be traced back at least to the ancient Greeks (possibly to the physician Erasistratus). It has had the support of a large share of the scientific community since the 1800s when some of the first formal experiments into the matter were conducted by Paul Broca. Broca recorded the weights of autopsied brains and found a positive association between education level and brain size. These findings would later be cited by Charles Darwin as he made a case for the large brain of humans as an evolved trait responsible for our superior intelligence compared to other species.

Indeed, the hominid brain has had a unique evolutionary trajectory. It is thought to have tripled in size beginning about 1.5 million years ago, in the process creating a large disparity between our brains and those of our non-human primate relatives like the great apes. This rapid brain expansion, along with our highly developed cognitive abilities, has led many to suggest our brain growth was directly connected to increased capacities for intellect, language, and innovation. Humans don't have the biggest brains in the animal kingdom, however (that distinction belongs to sperm whales, whose brains weigh about six times what ours do), but it is thought that the way the human brain grew--by adding more to cortical areas devoted to conceptual, rather than perceptual and motor, processing--may have been responsible for our accelerated gains in intellectual ability.

Thus, while brain size has come to be considered an important indicator of the intelligence of humans in comparison to other species, it is thought that the cerebral cortex is really the defining feature of the human brain. The cortex makes up about 80% of our brain, a proportion that is much higher than that seen in many other mammalian species. The intricate folding and complex circuitry of the cerebral cortex may contribute to the greater intellectual capabilities we have compared to other species.

The advent of neuroimaging and its refinement over the last 15 years have allowed us to test in vivo the hypothesis that brain size among individuals corresponds to intelligence and, more specifically, that the thickness of the cerebral cortex is especially linked to cognitive abilities. Many of the results have supported these hypotheses. For example, a recent analysis of 28 neuroimaging studies found a significant average correlation of .40 between brain size and general mental ability. Another study of 6-18 year-olds in the United States found cortical thickness in several areas of the brain to be associated with higher scores on the Wechsler Abbreviated Scale of Intelligence. Choi et al. (2008) also found a positive correlation between cortical thickness and intelligence measures like Spearman's g; in addition they were able to explain 50% of the variance in IQ scores using estimates of cortical thickness in conjunction with functional magnetic resonance imaging data.

Recent studies have also allowed us to observe that brain size is not completely static, and that experience (as well as age) can alter the size or structure of certain parts of the brain. For example, one well-known study looked at the brains of London taxi drivers and found that taxi drivers had larger hippocampi compared to control subjects (hypothetically because the hippocampus is involved in spatial memory). The size of hippocampi was also correlated with time spent driving a taxi, suggesting it may have been the experience of navigating a cab through London that promoted the hippocampal changes, and not just that people with better spatial memory were more likely to become taxi drivers. In another study, participants underwent a brain scan, then spent a few months learning how to juggle, then had a second brain scan. The second scan showed increased grey matter in areas of the cortex associated with perceiving visual motion. When they stopped juggling, then had a third brain scan a few months later, the size of these areas had again decreased.

Based on such results, a number of studies have also looked at how certain activities might affect cortical thickness, with the assumption that increased cortical thickness represents an enhancement of ability. For example, an influential study published in 2005 by Lazar et al. examined the brains of 20 regular meditators (with an average of about 9 years of meditation experience) and compared the thickness of their cortices to that of people with no meditation experience. The investigators found that the meditators had increased grey matter in areas like the prefrontal cortex and insula; the authors interpreted this as being indicative of enhanced attentional abilities and a reduction of the effects of aging on the cortex, among other things.

Problems with the increased mass = improved function hypothesis

But there are a few problems with conclusions like those drawn in the meditation study conducted by Lazar et al. The first is that in the Lazar et al. study--and in many studies that associate brain structure with function--brain structure was only assessed at one point in time. The issue with looking at brain structure only once is that it doesn't allow one to determine whether the structure changed before or after the experience. In other words, in the Lazar et al. study, perhaps increased cortical thickness in prefrontal areas was associated with personality traits that made individuals more likely to enjoy meditating, rather than meditation causing the structure of the cerebral cortex to change.

Regardless, even if we knew the behavior (e.g. meditation) came before the structural changes, it still would be unclear how the structural modifications translated into changes in function. Postulating, as Lazar et al. did, that the changes may have been associated with an increased ability to sustain attention or be self-aware is a clear example of confirmation bias, as the authors are interpreting results in a way that supports their hypothesis when they lack the evidence to do so. However, even if we agreed with the specious reasoning of Lazar et al. that "the right hemisphere is essential for sustaining attention"--and thus that structural changes to it are especially likely to affect attention--it is far from established that increased cortical thickness in general represents increased function.

In fact, there are myriad examples in the literature suggesting there is not always a positive correlation between cortical thickness and cognitive ability. For example, a study involving patients with untreated major depressive disorder found they had increased cortical thickness in several areas of the brain. Another study detected increased cortical thickness in the frontal, temporal, and parietal lobes of children and adolescents born with fetal alcohol spectrum disorder. A 2011 investigation involving binge drinkers observed thicker cortices in female binge drinkers, and thinner cortices in male binge drinkers. And a report examining the effects of education level on cortical thickness found those with higher education levels actually had thinner cortices. This is just a small sampling of the many studies indexed on PubMed that do not support the increased mass = improved function hypothesis. The reasons for this lack of consistent support could be numerous, ranging from confounds to measurement problems. But the important point is that there is no axiom that increased grey matter will result in improved functionality.

Of course this doesn't mean there is never an association between brain volume or cortical thickness and brain function. We should, however, interpret studies that identify such associations with caution. And we should be even more wary of popular press articles that summarize such studies, for when these studies are reported on by the media, the complexity of the association between structure and function--as well as the cross-sectional limitations of many of the studies--is rarely taken into consideration.

There are some methodological approaches that would allow investigators to make more confident interpretations of the results of these types of studies. First, an emphasis should be placed on using longitudinal designs. In other words, an initial brain scan should be conducted to obtain a baseline measure of brain structure. Then the activity (e.g. meditation) should be performed for a period of time before brain structure is assessed at least one more time. This reduces the weight that must be given to the concern that structural differences predated the activity. Additionally, it is best for some measure of function to be taken along with each assessment of brain structure, to support any suggestion that the structural changes correspond to a functional change.
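The logic of such a longitudinal design can be sketched in a few lines of code. This is an illustration only: the cortical thickness values below are invented, and a real analysis would involve far more participants and dedicated neuroimaging software.

```python
# Sketch of the longitudinal logic described above, using made-up numbers:
# each participant is scanned before and after the activity, and we test
# whether the within-person change differs from zero (a paired comparison).
from math import sqrt
from statistics import mean, stdev

pre  = [2.41, 2.55, 2.38, 2.60, 2.47, 2.52, 2.44, 2.58]  # mm, hypothetical
post = [2.45, 2.58, 2.42, 2.63, 2.46, 2.57, 2.49, 2.61]  # mm, hypothetical

diffs = [b - a for a, b in zip(pre, post)]            # within-person changes
t = mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))   # paired t statistic
print(f"mean change = {mean(diffs):.3f} mm, t = {t:.2f}")
```

The key design point is that each participant serves as their own baseline, which is what rules out the possibility that the structural differences existed before the activity began.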

Many studies are already starting to utilize such approaches. For example, a recent study by Santarnecchi et al. examined the effects of mindfulness meditation on brain structure. The researchers conducted brain scans before and after an 8-week Mindfulness-Based Stress Reduction (MBSR) program, using participants who had never meditated before. Participants also underwent psychological evaluations before and after the program, and they experienced a reduction in anxiety, depression, and worry over its course. But they also showed an increase in cortical thickness in the right insula and somatosensory cortex. The Lazar et al. study noted an increase in volume of the right insula as well, suggesting there may be something to this finding. And because of the longitudinal design of the study by Santarnecchi et al., along with the fact that the participants were not regular meditators, we can have more confidence that the structural changes seen came as a result of--instead of predated--the meditation practice.

As this example shows, an overly liberal interpretation of a study's results does not mean the findings themselves are baseless; despite the methodological problems of the Lazar et al. study, some of its results were later supported. If we are going to have confidence in the value of associations between structure and function, however, more emphasis must be placed on using study designs that allow one to infer causality. And, when it comes to the interpretation of results, a conservative approach should be advocated.

Lazar, S., Kerr, C., Wasserman, R., Gray, J., Greve, D., Treadway, M., McGarvey, M., Quinn, B., Dusek, J., Benson, H., Rauch, S., Moore, C., & Fischl, B. (2005). Meditation experience is associated with increased cortical thickness. NeuroReport, 16(17), 1893-1897. DOI: 10.1097/01.wnr.0000186598.66243.19

Detecting lies with fMRI

In 2006, a company called No Lie MRI began advertising their ability to detect "deception and other information stored in the brain" using functional magnetic resonance imaging (fMRI). They were not the first to make this claim. Two years prior, a company called Cephos had been founded on the same principle. Both companies were launched by entrepreneurs who hoped to one day replace the polygraph machine, with its recognized shortcomings, with a foolproof approach to lie detection.

Within several years of the establishment of these companies, their services were used on multiple occasions to obtain fMRI data to submit as evidence in court--in both criminal and civil cases. In each instance the court declined to admit the evidence, but the decision to exclude it was carefully deliberated. In one 39-page opinion recommending the exclusion of the evidence in a Medicare fraud case, the judge indicated that fMRI evidence might one day be admissible, even if it had not been determined to be an extraordinarily reliable method of lie detection. Thus, it seems that the use of neuroimaging for lie detection may be a real possibility at some point in the future. But how realistic is it right now?

Designing studies to test fMRI-based lie detection

To be able to evaluate the current state of the research regarding fMRI-based lie detection, it's important to have an understanding of how studies in this area have been designed and conducted. The goal of these studies has been to identify differences in the brain activity of participants when they are telling the truth vs. when they are lying. Thus, participants are usually instructed to lie when asked about certain topics and told to answer other questions truthfully. The brain activity observed during the truthful responses is compared to brain activity observed during deceptive responses in an attempt to isolate areas that are active only during prevarication.
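The basic contrast logic behind these studies can be illustrated with a toy example. Everything below is invented for illustration--real analyses operate on tens of thousands of voxels with proper statistical modeling and corrections, not three hand-picked regions:

```python
# A toy version of the truth-vs-lie contrast described above: for each brain
# region, compare mean signal during truthful trials with mean signal during
# deceptive trials. All numbers are fabricated for illustration.
from statistics import mean

activity = {  # region -> (truth trials, lie trials), arbitrary signal units
    "prefrontal cortex":        ([1.1, 0.9, 1.0, 1.2], [1.8, 1.7, 1.9, 2.0]),
    "visual cortex":            ([2.0, 2.1, 1.9, 2.0], [2.0, 1.9, 2.1, 2.0]),
    "inferior parietal lobule": ([0.8, 0.7, 0.9, 0.8], [1.4, 1.3, 1.5, 1.2]),
}

for region, (truth, lie) in activity.items():
    contrast = mean(lie) - mean(truth)           # lie-minus-truth difference
    flag = "candidate" if contrast > 0.3 else "no clear difference"
    print(f"{region}: contrast = {contrast:+.2f} ({flag})")
```

In this toy data the "prefrontal cortex" and "inferior parietal lobule" show elevated activity during lies while the "visual cortex" does not, mimicking the kind of region-by-region output such a contrast produces.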

For example, in one study participants were directed to "steal" one of two items: a watch or a ring. Then, the subjects were put into an fMRI machine and questioned about the mock thievery, but they were told to deny taking anything. The subjects were asked about both items, but because they had only taken one of the two, some of their responses naturally were lies and others were the truth. Researchers then compared the participants' brain activity during truthful and deceitful responses.

Other studies vary considerably in the details, but have a similar general approach. In one experiment, participants were given two playing cards and then told to deny having one of them when asked about it. In another example, participants picked a number between three and eight and then were told to deny picking their chosen number when it was shown to them. Thus, most of these experiments involve "directed deception," where a participant is told to lie about a particular experience while their brain activity is being observed.

What the studies have shown

The collection of studies investigating the neural correlates of deception has grown considerably over the years, leading to a number of candidate regions being singled out as potentially playing a role in deception. In a recent publication, Farah et al. conducted a meta-analysis of these studies to attempt to identify brain structures that had consistently been activated during lying. The investigators analyzed 23 studies which, as a whole, described a total of 321 foci of interest.

Farah et al. found significant variability among the results of the studies, and no one region was activated in all of them. However, a number of areas were generally more likely to be active during deception, including parts of the prefrontal cortex, inferior parietal lobule, anterior insula, and the medial superior frontal cortex.

These commonly-activated areas may provide us with some clues as to where we should be looking for the neural correlates of deception. To have confidence in fMRI for lie detection, however, it will be important to see consistent activation of a particular network of areas from study to study. Because the studies included in this meta-analysis all differed slightly in experimental design, it is not surprising they resulted in slightly different patterns of brain activity. Even if with further studies we see a more consistent pattern of activation, however, can we be confident it is representative of lying? Some would argue that we cannot.

Problems with fMRI-based lie detection research

Beyond the lack of uniform results in laboratory studies of deception, there are a number of other hurdles associated with using fMRI for lie detection. One major difficulty is in determining that the brain activity we see occurring during deception is specific only to lying. For example, if participants are instructed to steal an item and then are asked to lie about stealing it later, the increased activity observed when they are asked about the item could be associated with deception. But, it could also be that their unique personal experience with the item (e.g. having "chosen" to steal it as opposed to one of the other items) causes a different pattern of activity that is representative of familiarity. In other words, different memory systems may be accessed for items that one has more knowledge of, and the activation of these networks could be responsible for the differences in brain activity observed in deception vs. sincerity conditions.

A study conducted by Hakun et al. serves as a good example of this problem. The investigators in this experiment asked participants to choose a number out of a series of numbers. Then, the participants were put in an fMRI machine and questioned about the number chosen; half of the participants were told beforehand to lie about which number they picked while the other half were asked to remain silent. Some of the areas identified in the meta-analysis discussed above that seem to be activated during deception were activated in both groups of participants (half of whom were not lying--nor even talking). Thus, it may be that activation in these areas is not specific to deception, but involved with memory retrieval, attention, or some other aspect of higher-order processing.

Another issue with using fMRI for lie detection involves the real-world applicability of the studies in this area. For example, when you ask someone to lie about a number they chose from a list, or even about a mock crime, the lie they tell will generally not have a great deal of emotional significance for them. In real life, however, lying is often associated with high emotionality, stress, anxiety, and so on. Thus, it is unclear whether the brain regions identified in fMRI studies of lie detection are only likely to be activated during the more dispassionate type of deception that occurs under laboratory conditions. If they are, other areas of the brain may be more likely to be activated during "real" lies, meaning fMRI lie detection based on current data might overlook activity often associated with lies in the real world.

Additionally, most of these fMRI studies are conducted on healthy participants (often college undergraduate students). It seems likely that these individuals may display different brain responses to lying than, for example, a sample of participants with a criminal past (a population this technology might be expected to work with if it were to have validity in the legal realm). For instance, one fMRI study conducted with people who had a criminal history and a diagnosis related to psychopathy found a different pattern of brain activity during instructed lying than that seen in other populations. Thus, brain activity during deception may differ from individual to individual depending on other personality characteristics, personal background, etc. Until we can be more certain about which areas of the brain are activated in everyone when they lie, it is difficult to have much confidence in the use of fMRI to detect deception.

One other potential problem--which applies to any method of detecting lies--is that we must be aware of countermeasures the subject may use to evade detection. Countermeasures are actions an individual might take to disrupt a lie detection device's ability to accurately identify dishonesty. For example, some have asserted that slightly painful actions like biting the tongue during control questions in polygraph tests can elevate physiological responses to those questions, making baseline activity too high to detect the deviations from it that occur during true deception. Countermeasures to disrupt fMRI-based lie detection are still relatively unexplored, but studies suggest they may be fairly simple to implement. In one study experimenters were able to detect lying in participants with 100% accuracy. However, when they asked participants to wiggle their fingers and toes "imperceptibly" during the fMRI scanning, accuracy was reduced to only 33%.

A long way to go

Thus, despite the optimism that led some to invest in businesses devoted primarily to fMRI-based lie detection, it seems the method still has a long way to go before it can be considered valid and reliable. Consequently, it is not something we are likely to see gain widespread acceptance anytime soon. But that does not mean it is not a future possibility. As our methods of neuroimaging improve and we develop better ways of identifying specific structures that are necessarily activated during certain behaviors, highly accurate neuroimaging for lie detection may emerge, bringing with it a collection of ethical dilemmas about how it should be used. However, such developments are likely decades away; for now the knowledge about the lies we tell will remain safely sequestered in our own heads.

Farah, M., Hutchinson, J., Phelps, E., & Wagner, A. (2014). Functional MRI-based lie detection: scientific and societal challenges. Nature Reviews Neuroscience. DOI: 10.1038/nrn3589

Using neuroimaging to expose the unconscious influences of priming

In 1996, a group of researchers at NYU conducted an interesting experiment. First, they had NYU students work on unscrambling letters to form words. Unbeknownst to the students, they had been split up into three groups, and each group unscrambled letters that formed slightly different words. One group unscrambled words with a "rude" connotation like aggressively, bold, and interrupt. Another group unscrambled "polite" words like considerate, patiently, and respect. And the third group unscrambled neutral words like watches and normally.

The students were told they should come find the experimenter, who would be waiting in a different room, after they finished the unscrambling task. This, however, was just another part of the experiment. When the students walked up to the experimenter, he was engaged in a conversation with someone else (who was actually in on the experiment). The experimenter stood in such a way that it was clear he knew the student was waiting for him, but he nevertheless continued his conversation and didn't acknowledge the student.

In fact, the experimenter continued talking for 10 minutes unless the student interrupted to draw attention to the fact that he or she was done with the unscrambling task (and being somewhat rudely ignored). What the experiment really had set out to determine was if the type of words the students unscrambled seemed to have an influence on whether or not they interrupted the experimenter. Interestingly, about 80% of students who unscrambled polite words waited a full 10 minutes without interrupting, while only 35% of the students who unscrambled rude words waited that long. On average the rude-word group only waited 5.4 minutes, compared to the polite-word group's 8.7 minutes. The students, of course, were not aware that the words they unscrambled had any effect on their patience, or lack thereof.

Now, think about the implications of this experiment in your daily life. If its findings are valid--and it's worth noting that this particular area of research has been criticized for the publication of studies that others have been unable to replicate--it suggests that information we are not consciously aware of shapes our thoughts and behavior. Taken a step further, we could begin to question how much of our behavior is even under our own conscious control. For example, you might swear that fight you got into with your significant other was about doing the dishes and it never would have happened if he/she hadn't blatantly disregarded your strong opinions--yet again--about leaving dirty dishes in the sink. But maybe your inclination towards hostility had been influenced by that jerk who cut you off in traffic an hour prior, causing you to overreact and call it quits on a relationship that was pretty good despite a lack of harmony on the relatively minor issue of timely dish washing.

The influence a previous experience has on our likelihood of responding in a particular way later on is known as priming. It was first discovered in the 1970s through a series of simple experiments exploring response time in tasks like determining whether groups of letters represented English words. For example, in one such experiment researchers presented participants with pairs of words. Sometimes the words used were actual English words (e.g. butter), other times they were nonsense words (e.g. nart), and they were presented in different combinations of each. The researchers found that participants were able to identify something as an English word more rapidly if the word presented before it had a related meaning (e.g. the first word was nurse and the second was doctor). Since then, a number of experiments have investigated this effect a previous experience can have on a subsequent response, showing that it can influence everything from reaction time to subtleties of behavior like the speed at which someone walks.
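The reaction-time effect from those lexical decision experiments can be sketched numerically. The times below are fabricated for illustration; they merely mimic the typical finding that a related prime speeds up word recognition:

```python
# Toy illustration of the lexical decision priming effect described above:
# targets preceded by a related prime (e.g. nurse -> doctor) are typically
# recognized faster than targets preceded by an unrelated prime.
from statistics import mean

rt_related   = [512, 498, 530, 505, 520, 489]  # ms, hypothetical trials
rt_unrelated = [575, 560, 590, 582, 568, 571]  # ms, hypothetical trials

priming_effect = mean(rt_unrelated) - mean(rt_related)
print(f"responses were {priming_effect:.0f} ms faster after a related prime")
```

A positive difference here is the priming effect itself: the related prime has implicitly activated the target word's meaning, so less processing is needed to recognize it.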

Priming and memory

Priming is considered an example of implicit memory, a term that describes a type of memory that can influence behavior even though we aren't consciously aware of it. We use a form of implicit memory called procedural memory every day when we engage in tasks that we have performed countless times before, like tying our shoes. In these cases we don't consciously think of the process involved in doing the job (often we are thinking of something quite different), but clearly we retain a memory of how to perform the task, and that memory facilitates its execution.

The influence of priming extends much further than shoe-tying, however. Although it may be difficult for us to accept, our implicit memory seems to affect the beliefs we hold and the decisions we make. Because our brains are so good at forming connections between things we see around us and things we have seen or learned in the past, our implicit memory is being accessed on a continuous basis. For example, in another study researchers put participants in two groups: one group filled out a questionnaire in a room that smelled strongly of citrus all-purpose cleaner, while the other filled out a questionnaire in a room with no apparent odor. Then, the researchers had both groups eat a crumbly biscuit. The group that had been exposed to the citrus smell was significantly more likely to clean up the crumbs from their biscuit. Even though they weren't consciously thinking about it, the citrus scent (hypothetically) conjured up implicit associations with cleanliness, which prompted the participants to clean up after themselves.

Priming and the brain

Understanding the neuroscientific correlates of priming has not been simple, in part because it seems to involve a diverse selection of brain areas. One general finding has been that there is a reduction in brain activity during exposure to a primed stimulus (i.e. a stimulus that has been preceded by priming) vs. an unprimed stimulus. For example, if you prime someone by exposing them to words related to transportation, then ask them to unscramble letters that could readily be formed into words like traffic or drive, you will see less activity in their brains than if you hadn't primed them. This should make intuitive sense, as the brain that has been primed is not having to work as hard. It can rely on cues from implicit memory to bring to mind potential words the letters might form.

One reason we tend to see many brain regions involved in priming is that different systems are used to process different types of stimuli--as well as different aspects of the same stimulus. For example, if the primed stimulus involved the meaning of a word, then we would see a decreased response to the primed stimulus in a number of areas of the brain associated with processing different aspects of a word, like meaning, spelling, phonology, and so on. If the primed stimulus involved an odor, we would see a reduction in brain activity in very different regions.  

There are also, however, some commonalities in the neural activity underlying priming across different types of stimuli. For example, regions of the inferior temporal cortex and inferior frontal gyrus have been found to respond to abstract qualities of stimuli, and thus they are activated even when the prime and the primed stimulus are presented in different ways. One study, for instance, saw activity in these areas when the prime involved normally-oriented words and the primed stimulus involved mirror-reversed words. The inferior temporal cortex and inferior frontal gyrus are also activated in response to primed stimuli of different perceptual modalities (e.g. auditory and visual), and they are still activated when the prime and the primed stimulus are each presented in a different modality. Thus, it may be that areas like these mediate the priming of concepts, regardless of how the stimulus is introduced and initially processed.

Neuroimaging evidence also suggests that the prefrontal cortex may play an especially important role in priming, as it is another area where activity is reduced in response to a number of different types of primed stimuli. The prefrontal cortex is frequently associated with executive functions, and as such it is involved in managing the activity of a network of brain areas in retrieving memories and handling other cognitive duties. Having an implicit memory to draw upon, however, may make its job a little easier, allowing the prefrontal cortex to work more efficiently to complete the task at hand. Thus, reduced activity in the prefrontal cortex during exposure to a primed stimulus may generally represent a decreased reliance on the conscious processing of a stimulus due to the contributions of implicit memory.

These are some of the patterns of brain activity that we can associate with priming, but what's really going on still remains somewhat unclear. Regardless, priming seems to be a process that is occurring in our minds all the time. And this feature of the way our brains work is also opportunistically manipulated by others every day, especially those who are trying to sell you something. Take Cinnabon, for example, the baked goods chain known for their American-sized cinnamon rolls. They intentionally place their ovens near the front of their stores so the smell of fresh-baked rolls drifts toward the entrance. Because they strategically put their stores primarily in enclosed buildings like malls and airports, these drifting aromas are more likely to be smelled by those who are passing by. Some of these passersby will then make an impulsive decision to stop in and buy a cinnamon roll. Is this decision free of any priming effects induced by the enticing odors emanating from the store? It's hard to say, but considering that Cinnabon's sales were lower when they tried putting the ovens at the back of stores, it would seem priming is playing a role in many decisions to indulge.

Even your desire to read this article might be able to be traced back to some priming influence that occurred earlier today, last week, or last year. The same could be said for my intention to write it. All of this leads to the inevitable question: with these multitudinous influences on our behavior from unconscious associations with words, sounds, smells, colors, etc., how much of our behavior do we really control? Don't think too hard about trying to answer that question, because you've likely already been primed to respond in a particular way.

Schacter, D., Wig, G., & Stevens, W. (2007). Reductions in cortical activity during priming. Current Opinion in Neurobiology, 17(2), 171-176. DOI: 10.1016/j.conb.2007.02.001