In 1935, an ambitious neurology professor named Egas Moniz sat in the audience at a symposium on the frontal lobes, enthralled by neuroscientist Carlyle F. Jacobsen's description of some experiments Jacobsen had conducted with fellow investigator John Fulton. Jacobsen and Fulton had damaged the frontal lobes of a chimpanzee named "Becky," and afterwards they had observed a considerable behavioral transformation. Becky had previously been stubborn, erratic, and difficult to train, but post-operation she became placid, imperturbable, and compliant.
Moniz had already been thinking about the potential therapeutic value of frontal lobe surgery in humans after reading some papers about frontal lobe tumors and how they affected personality. He believed that some mental disorders were caused by static abnormalities in frontal lobe circuitry. By removing a portion of the frontal lobes, he hypothesized he would also be removing neurons and pathways that were problematic, in the process alleviating the patient's symptoms. Although Moniz had been pondering this possibility, Jacobsen's description of the changes seen in Becky was the impetus Moniz needed to try a similar approach with humans. He did so just three months after seeing Jacobsen's presentation, and the surgical procedure that would come to be known as the frontal lobotomy was born.
Moniz's procedure initially involved drilling two holes in a patient's skull, then injecting pure alcohol subcortically into the frontal lobes, with the hopes of destroying the regions where the mental disorder resided. Moniz soon turned to another tool for ablation, however---a steel loop he called a leucotome (which is Greek for "white matter knife")---and began calling the procedure a prefrontal leucotomy. Although his means of assessing the effectiveness of the procedure were inadequate by today's standards---for example, he generally only monitored patients for a few days after the surgery---Moniz reported recovery or improvement in most of the patients who underwent the procedure. Soon, prefrontal leucotomies were being done in a number of countries throughout the world.
The operation attracted the interest of neurologist Walter Freeman and neurosurgeon James Watts. They modified the procedure again, this time entering the skull from the side with a large spatula-like instrument. Once inside the cranium, the spatula was wiggled up and down in the hopes of severing connections between the thalamus and prefrontal cortex (based on the hypothesis that these connections were crucial for emotional responses, and could precipitate a disorder when not functioning properly). They also renamed the procedure the "frontal lobotomy," since "leucotomy" implied only white matter was being removed, which was not the case with their method.
Several years later (in 1946), Freeman made one final modification to the procedure. He advocated for using the eye socket as an entry point to the frontal lobes (again to sever the connections between the thalamus and frontal areas). As his tool to do the ablation, he chose an ice pick. The ice pick was inserted through the eye socket, wiggled around to do the cutting, and then removed. The procedure could be done in 10 minutes; the development of this new "transorbital lobotomy" brought about the real heyday of lobotomy.
The introduction of transorbital lobotomy led to a significant increase in the popularity of the operation---perhaps due to the ease and expediency of the procedure. Between 1949 and 1952, somewhere around 5,000 lobotomies were conducted each year in the United States (the total number of lobotomies done by the 1970s is thought to have been between 40,000 and 50,000). Watts strongly protested the transformation of lobotomy into a procedure that could be done in one quick office visit---and performed by a psychiatrist instead of a surgeon, no less---which led him and Freeman to sever their partnership.
Freeman, however, was not discouraged; he became an ardent promoter of transorbital lobotomy. He traveled across the United States, stopping at mental asylums to perform the operation on any patients who seemed eligible and to train the staff to perform the surgery after he had moved on. Freeman himself is thought to have performed or supervised around 3,500 lobotomies; his patients included a number of minors and a 4-year-old child (who died 3 weeks after the procedure).
Eventually, however, the popularity of transorbital lobotomy began to fade. One would like to think that this happened because people recognized how barbaric the procedure was (along with the fact that the approach was based on somewhat flimsy scientific rationale). The real reasons for abandoning the operation, however, were more pragmatic. The downfall of lobotomy began with questions about the effectiveness of the surgery, especially in treating certain conditions like schizophrenia. It was also recognized that mental faculties like motivation, spontaneity, and abstract thought often suffered irreparable damage from the procedure. And the final nail in the coffin of lobotomy was the development of psychiatric drugs like chlorpromazine, which for the first time gave clinicians a pharmacological option for intractable cases of mental disorders.
It is easy for us now to look at the practice of lobotomy as nothing short of brutality, and to scoff at what seems like a tenuous scientific explanation for why the procedure should work. It's important, however, to look at such issues in the history of science in the context of their time. In an age when effective psychiatric drugs were nonexistent, psychosurgical interventions were viewed as the "wave of the future." They offered a hopeful possibility for treating disorders that were often incurable and potentially debilitating. And while the approach of lobotomy seems far too non-selective (meaning such serious brain damage was not likely to affect just one mental faculty) to us now, the idea that decreasing frontal lobe activity might reduce mental agitation was actually based on the available scientific literature at the time.
Still, it's clear that the decision to attempt to treat psychiatric disorders through inflicting significant brain damage represented a failure of logic at multiple levels. When we discuss neuroscience today, we often assume that our days of such egregious mistakes are over. And while we have certainly progressed since the time of lobotomies (especially in the safeguards protecting patients from such untested and dangerous treatments), we are not that far removed temporally from this sordid time in the history of neuroscience. Today, there is still more unknown about the brain than there is known, and thus it is to be expected that we continue to make significant mistakes in how we think about brain function, experimental methods in neuroscience, and more.
Some of these mistakes may be due simply to a natural human approach to understanding difficult problems. For example, when we encounter a complex problem we often first attempt to simplify it by devising some straightforward way of describing it. Once a basic appreciation is reached, we add to this elementary knowledge to develop a more thorough understanding---and one that is more likely to be a better approximation of the truth. However, that overly simplistic conceptualization of the subject can give birth to countless erroneous hypotheses when used in an attempt to explain something as intricate as neuroscience. And in science, these types of errors can lead a field astray for years before it finds its way back on course.
Other mistakes involve research methodology. Due to the rapid technological advances in neuroscience that have occurred in the past half-century, we have some truly amazing neuroscience research tools available to us that would have only been science fiction 100 years ago. Excitement about these tools, however, has caused researchers in some cases to begin utilizing them extensively before we are fully prepared to do so. This has resulted in using methods that cannot yet answer the questions we presume they can, and has provided us with results that we are sometimes unable to accurately interpret. In accepting the answers we obtain as legitimate and assuming our interpretations of results are valid, we may commit errors that can confound hypothesis development for some time.
Advances in neuroscience in the 20th and into the 21st century have been nothing short of mind-boggling, and our successes in understanding far outpace our long-standing failures. However, any scientific field is rife with mistakes, and neuroscience is no different. In this article, I will discuss just a few examples of how missteps and misconceptions continue to affect progress in the field of neuroscience.
The ________________ neurotransmitter
Nowadays the fact that neurons use signaling molecules like neurotransmitters to communicate with one another is one of those pieces of scientific knowledge that is widely known even to non-scientists. Thus, it may be a bit surprising that this understanding is less than 100 years old. It was in 1921 that the German scientist Otto Loewi first demonstrated that, when stimulated, the vagus nerve releases a chemical substance that can affect the activity of nearby cells. Several years later, that substance was isolated by Henry Dale and determined to be acetylcholine (at that point a substance that had already been identified, just not as a neurotransmitter). It wasn't until the middle of the 20th century, however, that it became widely accepted that neurotransmitters were used throughout the brain. The discovery of other neurotransmitters and neuropeptides would be scattered throughout the second half of the 20th century.
Of course, each time a new neurotransmitter (or other signaling molecule like a neuropeptide) is discovered, one of the first questions scientists want to answer is "what is its role in the brain?" The approach to answering this question generally involves some degree of simplification, as researchers seem to search for one overriding function that can be used to describe the neurotransmitter. As a result, the first really intriguing function discovered for a neurotransmitter often becomes a way of defining it.
Gradually, the discovered functions for the neurotransmitter become so diverse that it is no longer rational to attach one primary role to it, and researchers are forced to revise their initial explanations of the neurotransmitter's function by incorporating new discoveries. Sometimes it is later found that the original function linked to the neurotransmitter does not even match up well with the tasks the chemical is actually responsible for in the brain. The idea that the neurotransmitter has one main function, however, can be difficult to dislodge once it takes hold. This becomes a problem because that inaccurate conceptualization may lead to years of research seeking evidence to support a particular role for the neurotransmitter, while that hypothesized role may be misunderstood---or outright erroneous.
The neuropeptide oxytocin provides a good example of this phenomenon. The history of oxytocin begins with the same Henry Dale mentioned above. In 1906, Dale found that extracts of the ox pituitary gland could speed up uterine contractions when administered to a variety of mammals including cats, dogs, rabbits, and rats. This discovery soon led to the exploration of similar extracts to assist in human childbirth; they were found to be especially helpful in facilitating labor that was progressing slowly. These effects on childbirth are also where oxytocin earned its name, which is derived from the Greek words for "quick birth."
Clinical use of oxytocin didn't become widespread until researchers were able to synthesize oxytocin in the laboratory. But after that occurred in the 1950s, oxytocin became the most commonly used agent to induce labor throughout the world (sold under the trade names Pitocin and Syntocinon). However, despite the fact that oxytocin plays such an important role in a significant percentage of pregnancies today, the vast majority of research and related news on oxytocin in the past few decades has involved very different functions for the hormone: love, trust, and social bonding.
This line of research can be traced back to the 1970s when investigators learned that oxytocin reached targets throughout the brain, suggesting it might play a role in behavior. Soon after, researchers found that oxytocin injections could prompt virgin female rats to exhibit maternal behaviors like nest building. Researchers then began exploring oxytocin's possible involvement in a variety of social interactions ranging from sexual behavior to aggression. In the early 1990s, some discoveries of oxytocin's potential contribution to forming social bonds emerged from an uncommon species to use as a research subject: the prairie vole.
The prairie vole is a small North American rodent that looks kind of like a cross between a gopher and a mouse. They are somewhat unremarkable animals except for one unusual feature of their social lives: they form what seem to be primarily monogamous long-term relationships with voles of the opposite sex. This is not common among mammals; estimates suggest that only about 3 to 5 percent of mammalian species, at most, display evidence of monogamy.
A monogamous rodent species creates an interesting opportunity to study monogamy in the laboratory. Researchers learned that female prairie voles begin to display a preference for a male---a preference that can lead to a long-term attachment---after spending just 24 hours in the same cage as the male. It was also observed that administration of oxytocin made it more likely females would develop this type of preference for a male vole, and treatment with an oxytocin antagonist made it less likely. Thus, oxytocin became recognized as playing a crucial part in the formation of heterosexual social bonds in the prairie vole---a discovery that would help to launch a torrent of research into oxytocin's involvement in social bonding and other prosocial behaviors.
When researchers then turned from rodents like prairie voles to attempt to understand the role oxytocin might play in humans, research findings that suggested oxytocin acted to promote positive emotions and behavior in people began to accumulate. Administration of oxytocin, for example, was found to increase trust. People with higher levels of oxytocin were observed to display greater empathy. Oxytocin administration was discovered to make people more generous and to promote faithfulness in long-term relationships. One study even found that petting a dog was associated with increased oxytocin levels---in both the human and the dog. Due to the large number of study results indicating a positive effect of oxytocin on socialization, the hormone earned a collection of new monikers including the love hormone, the trust hormone, and even the cuddle hormone.
Excited by all of these newfound social roles for oxytocin, researchers eagerly---and perhaps impetuously---began to explore the role of oxytocin deficits in psychiatric disorders along with the possibility of correcting those deficits with oxytocin administration. One disorder that has gained a disproportionate amount of attention in this regard is autism spectrum disorder, or autism. Oxytocin deficits seemed to be a logical explanation for autism since social impairment is a defining characteristic of the disorder, and oxytocin appeared to promote healthy social behavior. As researchers began to delve into the relationship between oxytocin levels in the blood and autism, however, they did not find what seemed to be a direct relationship. Undeterred, investigators explored the effects of intranasal administration of oxytocin---which involves spraying the neuropeptide into the nasal passages---on symptoms in autism patients. And initially, there were indications intranasal oxytocin might be effective at improving autism symptoms (more on this below).
Soon, however, some began to question whether all of the excitement surrounding the "trust hormone" had caused researchers to make hasty decisions about experimental design, for all of the studies of intranasal oxytocin were relying on a method of administration that wasn't---and still has not been---fully validated. Researchers turned to the intranasal route because oxytocin that enters the bloodstream does not appear to cross the blood-brain barrier in appreciable amounts; there are indications, however, that the neuropeptide does make it into the brain when delivered intranasally. The problem is that even by this route very little oxytocin reaches the brain---according to one estimate, only 0.005% of the administered dose. Even when very high doses are used, the amount that reaches the brain via intranasal delivery does not seem comparable to the amount of oxytocin that must be administered directly into the brain (intracerebroventricularly) of an animal to influence behavior.
But many studies have indicated an effect, so what is really going on here? One possibility is that the effects are not due to the influence of oxytocin on the central nervous system, but to oxytocin entering the bloodstream and interacting with the large number of oxytocin receptors in the peripheral nervous system; if true, this would mean that exogenous oxytocin is not having the effects on the brain researchers have hypothesized. Another, more concerning possibility is that many of the studies published on the effects of intranasal oxytocin suffer from methodological problems like questionable statistical approaches to analyzing data.
Indeed, criticisms of the statistical methods of some of the seminal papers in this field have been made publicly. A recent review also found that studies of intranasal oxytocin often involve small sample sizes; significant findings of small studies are more likely to be statistical aberrations and not representative of true effects. It is also probable that the whole area of research is influenced by publication bias, which is the tendency to publish reports of studies that observe significant results while neglecting to publish reports of studies that fail to see any notable effects. This may seem like a necessary evil, as journal readers are more likely to be interested in learning about new discoveries than experiments that yielded none. Ignoring non-significant findings, though, can lead to the exaggeration of the importance of an observed effect because the available literature may seem to indicate no conflicting evidence (even though such evidence might exist hidden away in the file drawers of researchers throughout the world).
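The combined impact of small samples and publication bias can be made concrete with a short simulation---a sketch using entirely hypothetical numbers, not a model of any actual oxytocin study. Even when a true effect is weak, filtering underpowered studies through a significance threshold leaves a "published" literature that substantially overstates it:

```python
# Hypothetical simulation: many small two-group studies of a weak true effect,
# where only studies reaching statistical significance get "published."
import numpy as np

rng = np.random.default_rng(0)
true_effect = 0.2      # weak true effect, in standard-deviation units (assumed)
n_per_group = 20       # small sample size per group (assumed)
n_studies = 2000

published = []
for _ in range(n_studies):
    treated = rng.normal(true_effect, 1.0, n_per_group)
    control = rng.normal(0.0, 1.0, n_per_group)
    diff = treated.mean() - control.mean()
    se = np.sqrt(treated.var(ddof=1) / n_per_group +
                 control.var(ddof=1) / n_per_group)
    if diff / se > 2.02:           # ~two-tailed p < .05 critical t at df ≈ 38
        published.append(diff)     # only significant results reach the journals

print(f"true effect: {true_effect}")
print(f"mean effect in the 'published' literature: {np.mean(published):.2f}")
```

Because only the lucky overestimates clear the significance bar, the published average lands several times higher than the true effect; larger samples shrink this gap, which is one reason small-sample literatures are especially vulnerable to this distortion.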
These potential issues are underscored by the inconsistent research results and failed attempts at replicating, or repeating, studies that have reported significant effects of intranasal oxytocin. For example, one of the most influential studies on intranasal oxytocin, which found that oxytocin increases trust, has failed to replicate several times. And in many cases, null research findings have emerged after initial reports indicated a significant effect. The findings of the early autism studies mentioned above, for example, have been contradicted by multiple randomized controlled trials conducted in the last few years, which reported a lack of a significant therapeutic effect.
Not surprisingly, over the years the simple understanding of oxytocin as a neuropeptide that promotes positive emotions and behavior has also become more complicated as it was learned that the effects of oxytocin might not always be so rosy. In one study, for example, researchers observed intranasal oxytocin to be associated with increased envy and gloating. Another study found oxytocin increased ethnocentrism, or the tendency to view one's own ethnicity or culture as superior to others. And in a recent study, intranasal administration of oxytocin increased aggressive behavior. To add even more complexity to the picture, the effects of oxytocin may not be the same in men and women and may even be disparate in different individuals and different environmental contexts.
In an attempt to explain these discordant findings, researchers have proposed new interpretations of oxytocin's role in social behavior. One hypothesis, for example, suggests that oxytocin is involved in promoting responsiveness to any important social cue---whether it be positive (e.g. smiling) or negative (e.g. aggression); this is sometimes called the "social salience" hypothesis. Despite such recent efforts to reconcile the seemingly contradictory findings in oxytocin research, however, there is still not a consensus as to the effects of oxytocin, and the hypothesis that oxytocin is involved in positive social behavior continues to guide the majority of the research in this area.
Thus, for years now oxytocin research has centered on a role for the neuropeptide that is at best sensationalized and at worst deeply flawed. And oxytocin is only the most recent example of this phenomenon. In the 1990s, dopamine earned a reputation as the "pleasure neurotransmitter." Soon after, serotonin became known as the "mood neurotransmitter." These appellations were based on the most compelling discoveries linked to these neurotransmitters: dopamine is involved in processing rewarding stimuli, and serotonin is targeted by treatments for depression.
However, now that we know more about these substances, it is clear these short definitions of functionality are much too simplistic. Not only are dopamine and serotonin involved in much more than reward and mood, respectively, but also the roles of these two neurotransmitters in reward and mood seem to be very complicated and poorly understood. For example, most researchers no longer think dopamine signaling causes pleasure, but that it is involved with other intricacies of memorable experiences like the identification of important stimuli in the environment---whether they be positive (i.e. rewarding) or negative. Likewise, that serotonin levels alone don't determine mood is now common knowledge in scientific circles (and is finding its way into public perception as well). Thus, these short, easy-to-remember titles are misleading---and somewhat useless.
In assigning one function to one neurotransmitter or neuropeptide, we overlook important facts, such as the understanding that these neurochemicals often act at multiple receptor subtypes, sometimes with drastically different effects. And we neglect to consider that different areas of the brain have different levels of receptors for each neurochemical, and may be preferentially populated with one receptor subtype over another---leading to different patterns of activity in brain regions with different functional specializations. Add to that all of the downstream effects of receptor activation (which can vary significantly depending on the receptor subtype, the brain area in which it is found, etc.), and you have an extremely convoluted picture. Trying to sum it up in one function is ludicrous.
Not only do these simplifying approaches hinder a more complete understanding of the brain, they also waste countless hours of research and dollars of research funding on pursuing the confirmation of ideas that will likely have to be replaced eventually with something more elaborate. Still, this type of simplification in science does seem to serve a purpose. Our brains gravitate towards these straightforward ways of explaining things---possibly because without some comprehensible framework to start from, understanding something as complex as the brain seems like a Herculean task. However, if we are going to utilize this approach, we should at least do it with more awareness of our tendency to do so. By recognizing that, when it comes to the brain, the story we are telling is almost always going to be much more complicated than we are inclined to believe, perhaps we can avoid committing the mistakes of oversimplification we have made in the past.
Psychotherapeutic drugs and the deficits they correct
Before the 1950s, the treatment of psychiatric disorders looked very different from how it does today. As discussed above, unrefined neurosurgery like a transorbital lobotomy was considered a viable approach to treating a variety of ailments ranging from panic disorder to schizophrenia. But lobotomy was only one of a number of potentially dangerous interventions used at the time that generally did little to improve the mental health of most patients. Pharmacological treatments were not much more refined, and often involved the use of agents that simply acted as strong sedatives to make a patient's behavior more manageable.
The landscape began to change dramatically in the 1950s, however, when a new wave of pharmaceutical drugs became part of psychiatric treatment. The first antipsychotics for treating schizophrenia, the first antidepressants, and the first benzodiazepines to treat anxiety and insomnia were all discovered in this decade. Some refer to the 1950s as the "golden decade" of psychopharmacology, and the decades that followed as the "psychopharmacological revolution," since over this time the discovery and development of psychiatric drugs would progress exponentially; soon pharmacological treatments would be the preferred method of treating psychiatric illnesses.
The success of new psychiatric drugs over the second half of the twentieth century came as something of a surprise because the disorders these drugs were being used to treat were still poorly understood. Thus, drugs were often discovered to be effective through a trial and error process, i.e. test as many substances as we can and eventually maybe we'll find one that treats this condition. Because of how little was understood about the biological causes of these disorders, if a drug with a known mechanism was found to be effective in treating a disorder with an unknown mechanism, often it led to a hypothesis that the disorder must be due to a disruption in the system affected by the drug.
Antidepressants serve as a prime example of this phenomenon. Before the 1950s, a biological understanding of depression was essentially nonexistent. The dominant perspective of the day on depression was psychoanalytic---depression was caused by internal conflicts among warring facets of one's personality, and the conflicts were generally considered to be created by the internalization of troublesome or traumatic experiences that one had gone through earlier in life. The only non-psychoanalytic approaches to treatment involved poorly understood and generally unsuccessful procedures like electroconvulsive shock therapy---which was actually effective in certain cases, but potentially dangerous in others---and treatments like barbiturates or amphetamines, which didn't seem to target anything specific to depression but instead caused widespread sedation or stimulation, respectively.
The first antidepressants were discovered serendipitously. The story of iproniazid, one of the first drugs marketed specifically for depression---the other being imipramine, which was discovered and first used clinically at around the same time---is a good example of the serendipity involved. In the early 1950s, researchers were working with a chemical called hydrazine and investigating its derivatives for anti-tuberculosis properties (tuberculosis was a scourge at the time, and it was routine to test any new chemical for its potential to treat the disease). Interestingly, hydrazine derivatives might never have been tested at all if the Germans hadn't used hydrazine as rocket fuel during World War II, leaving large surpluses of the substance at the end of the war to be sold cheaply to pharmaceutical companies.
In 1952, a hydrazine derivative called iproniazid was tested on tuberculosis patients in Sea View Hospital on Staten Island, New York. Although the drug did not seem to be superior to other anti-tuberculosis agents in treating tuberculosis, a strange side effect was noted in these preliminary trials: patients who took iproniazid displayed increased energy and significant improvements in mood. One researcher reported that patients were "dancing in the halls 'tho there were holes in their lungs." Although at first largely overlooked as "side effects" of iproniazid treatment, eventually researchers became interested in the mood-enhancing effect of the drug in and of itself; before the end of the decade, the drug was being used to treat patients with depression.
Around the same time as the discovery of the first antidepressant drugs, a new technique called spectrophotofluorimetry was being developed. This technique allowed researchers to detect changes in the levels of neurotransmitters called monoamines (e.g. dopamine, serotonin, norepinephrine) after the administration of drugs (like iproniazid) to animals. Using spectrophotofluorimetry, researchers determined that iproniazid and imipramine were having an effect on monoamines. Specifically, the administration of these antidepressants was linked to an increase in serotonin and norepinephrine levels.
This discovery led to the first biological hypothesis of depression, which suggested that depression is caused by deficiencies in levels of serotonin and/or norepinephrine. At first, this hypothesis focused primarily on norepinephrine and was known as the "noradrenergic hypothesis of depression." Later, however---due in part to the putative effectiveness of antidepressant drugs developed to more specifically target the serotonin system---the emphasis would be placed more on serotonin's role in depression, and the "serotonin hypothesis of depression" would become the most widely accepted view of depression.
The serotonin hypothesis would go on to be endorsed not only by the scientific community, but also---thanks in large part to the frequent referral to a serotonergic mechanism in pharmaceutical ads for antidepressants---by the public at large. It would guide drug development and research for years. As the serotonin hypothesis was reaching its heyday, however, researchers were also discovering that it didn't seem to tell the whole story of the etiology of depression.
A number of problems with the serotonin hypothesis were emerging. One was that antidepressant drugs took weeks to produce a therapeutic benefit, but their effects on serotonin levels seemed to occur within hours after administration. This suggested that, at the very least, some mechanism other than increasing serotonin levels was involved in the therapeutic effects of the drugs. Other research that questioned the hypothesis began to accumulate as well. For example, experimentally depleting serotonin in humans was not found to cause depressive symptoms.
There is now a long list of experimental findings that question the serotonin and noradrenergic hypotheses (indeed, the area of research is muddied even further by evidence suggesting antidepressant drugs may not even be all that effective). Clearly, changes in monoamine levels are an effect of most antidepressants, but it does not seem that there is a direct relationship between serotonin or norepinephrine levels and depression. At a minimum, there must be another component to the mechanism.
For example, some have proposed that increases in serotonin levels are associated with the promotion of neurogenesis (the birth of new neurons) in the hippocampus, which is an important brain region for the regulation of the stress response. But recently researchers have also begun to deviate significantly from the serotonin hypothesis, suggesting bases for depression that are different altogether. One more recent hypothesis, for instance, focuses on a role for the glutamate system in the occurrence of depression.
The serotonin hypothesis of depression is just one of many hypotheses of the biological causes of psychiatric disorders that were formulated based on the assumption that the primary mechanism of a drug that treats the disorder must be correcting the primary dysfunction that causes the disorder. The same logic was used to devise the dopamine hypothesis of schizophrenia and the low arousal hypothesis of attention-deficit hyperactivity disorder (ADHD). Both of these hypotheses were at one point the most commonly touted explanations for schizophrenia and ADHD, respectively, but both are now generally considered too simplistic (at least in their original formulations).
The logic used to construct such hypotheses is somewhat tautological: drug A increases B and treats disorder C, thus disorder C is caused by a deficiency in B. It neglects to recognize that B may be just one factor influencing some downstream target, D, and thus the effects of the drug may be achieved in various ways, of which B is just one. It fails to appreciate the sheer complexity of the nervous system and the multitudinous factors that are likely involved in the onset of psychiatric illness. These factors include not just neurotransmitters, but also hormones, genes, gene expression, aspects of the environment, and an extensive list of other possible influences. The complexity of psychiatry likely means there are an almost inconceivable number of ways for a disorder like depression to develop, and our understanding of the main pathways involved is likely still at a very rudimentary level.
Thus, when we simplify such a complex issue to rest primarily upon the levels of one neurotransmitter, we are making a similar type of mistake as discussed in the first section of this article, but perhaps with even greater repercussions. For the errors that result from simplifying psychiatric disorders into "one neurotransmitter" maladies are errors that affect not just progress in neuroscience, but also the mental and physical health of patients suffering from these disorders. Many of these patients are prescribed psychiatric drugs on the assumption that their disorder is simple enough to be fixed by adjusting some "chemical imbalance"; perhaps it is not surprising, then, that psychiatric drugs are ineffective in a large proportion of patients. And many patients continue taking such drugs---sometimes with minimal benefit---despite experiencing significant side effects. Thus, there is an even greater imperative in this area to move away from searching for simple answers based on known mechanisms and venture out into more intimidating and unknown waters.
Our faith in functional neuroimaging
As approaches to creating images of brain activity like positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) were developed in the second half of the twentieth century, they understandably sparked a great deal of excitement among neuroscientists. These methods allowed neuroscience to achieve something once thought to be impossible---the ability to see what was happening in the brain in (close to) real time. By monitoring cerebral blood flow using a technique like fMRI, one can tell which brain areas are receiving the most blood---and by extension which areas are most active neuronally---when someone is performing some action (e.g. completing a memory task, thinking about a loved one, viewing pictures of rewarding or aversive stimuli, etc.).
This method of neuroimaging, which finally allowed researchers to draw conclusions about elusive connections between structure and function, was dubbed functional neuroimaging. Functional neuroimaging methods have predictably become some of the most popular research tools in neuroscience over the last few decades. fMRI surpassed PET as a preferred tool for functional neuroimaging soon after it was developed (due to a variety of factors including better spatial resolution and a less invasive approach), and it has been the investigative method of choice in over 40,000 published studies since the 1990s.
The potential of functional neuroimaging---and fMRI in particular---to unlock countless secrets of the brain intrigued not only investigators but also the popular press. The media quickly realized that the results of fMRI studies could be simplified, combined with some colorful pictures of brain scans, and sold to the public as representative of huge leaps in understanding which parts of the brain are responsible for certain behaviors or patterns of behaviors. The simplification of these studies led to incredible claims that intricate patterns of behavior and emotion like religion or jealousy emanated primarily from one area of the brain.
Fortunately, this wave of sensationalism has died down a bit as many neuroscientists have been vocal about how this type of oversimplification is taken so far that it propagates untruths about the brain and misrepresents the capabilities of functional neuroimaging. The argument against oversimplifying fMRI results, though, is often an argument against oversimplification itself. The assumption is that the methodology is not flawed, but the interpretation is. More and more researchers, however, are asserting that not only are the reported results of neuroimaging experiments ripe for misinterpretation, but that they are also often simply inaccurate.
One major problem with functional neuroimaging involves how the data from these experiments are handled. In fMRI, for example, the device creates a representation of the brain by dividing an image of the brain into thousands of small 3-D cubes called voxels. Each voxel can represent the activity of over a million neurons. Researchers must then analyze the data to determine which voxels are indicative of higher levels of blood flow, and these results are used to determine which areas of the brain are most active. Most of the brain is active at all times, however, so researchers must compare activity in each voxel to activity in that voxel during another task to determine if blood flow in a particular voxel is higher during the task they are interested in.
Due to the sheer volume of data, an issue arises with the task of deciding whether blood flow observed in a particular voxel is representative of activity above a baseline. Each fMRI image can consist of anywhere from 40,000 to 500,000 voxels, depending on the settings of the machine, and each experiment involves many images (sometimes thousands), each taken a couple of seconds apart. This creates a statistical complication called the multiple comparisons problem, which essentially states that if you perform a large number of tests, you are more likely to find at least one significant result simply by chance than if you performed just one test.
For example, if you flip a coin ten times it would be highly unlikely you would get tails nine times. But, if you flipped 50,000 coins ten times, you would be much more likely to see that result in at least one of the coins. That coin flip result is what, in experimental terms, we would call a false positive. If you're using a typical coin, getting nine tails out of ten flips doesn't tell you something about the inherent qualities of the coin---it's just a statistical aberration that occurred by chance. The same type of thing is more likely to occur when a researcher makes the sometimes millions of comparisons (between active and baseline voxels) involved with an fMRI study. By chance alone, it's likely some of them will appear to indicate a significant level of activity.
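The coin-flip arithmetic is easy to check with a short simulation (a toy illustration, not an fMRI analysis; the 50,000 figure simply echoes the voxel counts mentioned above):

```python
import random

random.seed(0)  # make the simulation reproducible

def prob_nine_or_more_tails(flips=10, trials=100_000):
    """Estimate P(at least 9 tails in 10 flips of a fair coin)."""
    hits = 0
    for _ in range(trials):
        tails = sum(random.random() < 0.5 for _ in range(flips))
        if tails >= 9:
            hits += 1
    return hits / trials

# For one coin, the event is rare: the exact probability is
# (10 + 1) / 2**10, or about 1.1%.
p_single = prob_nine_or_more_tails()

# Across 50,000 independent coins, the chance that at least one of
# them shows >= 9 tails is 1 - (1 - p)**50000 -- essentially certain.
p_at_least_one = 1 - (1 - p_single) ** 50_000
print(p_single, p_at_least_one)
```

One rare event per coin, but near-certainty across tens of thousands of coins: that is the multiple comparisons problem in miniature.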
This problem was exemplified through an experiment conducted by a group of researchers in 2009 that involved an fMRI scan of a dead Atlantic salmon (yes, the fish). The scientists put the salmon in an fMRI scanner and showed the fish a collection of pictures depicting people engaged in different social situations. They went so far as to ask the salmon---again, a dead fish---what emotion the people in the photographs were experiencing. When the researchers analyzed their data without correcting for the multiple comparisons problem, they observed the miraculous: the dead fish appeared to display brain activity that indicated it was "thinking" about the emotions being portrayed in the photographs. Of course this wasn't what was really going on; instead, it was that the false positives emerging due to the multiple comparisons problem made it appear as if there was real activity occurring in the fish's brain when obviously there was not.
The salmon experiment shows how serious a concern the multiple comparisons problem can be when it comes to analyzing fMRI data. The problem, however, is a well-known issue by now, and most researchers correct for it in some way when statistically analyzing their neuroimaging data. Even today, however, not all do---a 2012 review of 241 fMRI studies found that the authors of 41% of them did not report making any adjustments to account for the multiple comparisons problem. Even when conscious attempts to avoid the multiple comparisons problem are made, though, there is still a question of how effective they are at producing reliable results.
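To see what a correction accomplishes, consider the simplest one, the Bonferroni correction, which divides the significance threshold by the number of tests (a minimal sketch using simulated null data; the voxel count is hypothetical, and real fMRI pipelines typically use more sophisticated methods such as random field theory or false discovery rate control):

```python
import random

random.seed(1)

n_tests = 50_000   # hypothetical: one statistical test per voxel
alpha = 0.05

# Under the null hypothesis (no real activity anywhere), p-values
# are uniformly distributed between 0 and 1.
p_values = [random.random() for _ in range(n_tests)]

# Naive thresholding at alpha: roughly alpha * n_tests (~2,500)
# voxels come out "significant" by chance alone.
naive_hits = sum(p < alpha for p in p_values)

# Bonferroni correction: require p < alpha / n_tests, which keeps
# the chance of even one false positive across all tests near alpha.
bonferroni_hits = sum(p < alpha / n_tests for p in p_values)
print(naive_hits, bonferroni_hits)
```

The trade-off is the usual one: the corrected threshold all but eliminates false positives, but it is so strict that genuine activity can be missed as well.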
For example, one method for dealing with the multiple comparisons problem that has become popular among fMRI researchers is called clustering. In this approach, only when clusters of contiguous voxels are active together is there enough cause to consider a region of the brain more active than baseline. Part of the rationale here is that if a result is legitimate, it is more likely to involve aggregates of active voxels, and so by focusing on clusters instead of individual voxels one can reduce the likelihood of false positives.
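The idea can be sketched in one dimension (a toy version of cluster-extent thresholding; the function name and size threshold are hypothetical, and real analyses operate on 3-D voxel grids):

```python
def significant_clusters(voxel_significant, min_cluster_size=3):
    """Return runs of contiguous 'significant' voxels that are at least
    min_cluster_size long; isolated hits are discarded as likely noise."""
    clusters, run = [], []
    for i, sig in enumerate(voxel_significant):
        if sig:
            run.append(i)
        elif len(run) >= min_cluster_size:
            clusters.append(run)
            run = []
        else:
            run = []
    if len(run) >= min_cluster_size:
        clusters.append(run)
    return clusters

# The isolated hit at index 0 is dropped; the contiguous block at
# indices 4-7 survives as a cluster.
hits = [True, False, False, False, True, True, True, True, False]
print(significant_clusters(hits))  # → [[4, 5, 6, 7]]
```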
The problem with clustering is that it doesn't always seem to work that well. For example, a study published this year analyzed fMRI data from close to 500 subjects using three of the most popular fMRI software packages and found that one common approach to clustering still led to a false positive rate of up to 70%. So, even when researchers take pains to account for the multiple comparisons problem, the results often don't seem to inspire confidence that the effect observed is real and not just a result of random fluctuations in brain activity.
This isn't to say that no fMRI data should be trusted, or that fMRI shouldn't be used to explore brain activity. Rather, it suggests that much more care needs to be taken to ensure fMRI data are managed properly to avoid drawing erroneous conclusions. Unfortunately, however, the difficulties with fMRI don't begin and end with the multiple comparisons problem. Many fMRI studies also suffer from small sample sizes. This makes it more difficult to detect a true effect, and when some effect is observed it also means it is more likely to be a false positive. Additionally, it means that when a true effect is observed, the size of the effect is more likely to be exaggerated. Some researchers have also argued that neuroimaging research suffers from publication bias, which further inflates the importance of any significant findings because conflicting evidence may not be publicly available.
All in all, this suggests the need for more caution when it comes to conducting and interpreting the results of fMRI experiments. fMRI is an amazing technology that offers great promise in helping us to better understand the nervous system. However, functional neuroimaging is a relatively young field, and we are still learning how to properly use techniques like fMRI. It's to be expected, then---as with any new technology or recently developed field---that there will be a learning curve as we develop an appreciation for the best practices in how to obtain data and interpret results. Thus, while we continue to learn these things, we should use considerable restraint and a critical eye when assessing the results of functional neuroimaging experiments.
Progress in neuroscience over the past several centuries has changed our understanding of what it means to be human. Over that time, we learned that the human condition is inextricably connected to this delicate mass of tissue suspended in cerebrospinal fluid in our cranium. We discovered that most afflictions that affect our behavior originate in that tissue, and then we started to figure out ways to manipulate brain activity---through the administration of various substances both natural and man-made---to treat those afflictions. We developed the ability to observe activity in the brain as it occurs, making advances in understanding brain function that humans were once thought to be incapable of. And there are many research tools in neuroscience that are still being refined, but which hold the promise of even greater breakthroughs over the next 50 years.
The mistakes made along the way are to be expected. As a discipline grows, the accumulation of definitive knowledge does not follow a straight trajectory. Rather it involves an accurate insight followed by fumbling around in the dark for some time before making another truthful deduction. Neuroscience is no different. Although we have a tendency to think highly of our current state of knowledge in the field, chances are at any point in time it is still infested with errors. The goal is not to achieve perfection in that sense, but simply to remain cognizant of the impossibility of doing so. By recognizing that we never know as much as we think we know, and by frequently assessing which approaches to understanding are leading us astray, we are more likely to arrive at an approximation of the truth.
References (in addition to linked text above):
Finger, S. Origins of Neuroscience. New York, NY: Oxford University Press; 1994.
Valenstein, ES. Great and Desperate Cures: The Rise and Decline of Psychosurgery and other Radical Treatments for Mental Illness. New York, NY: Basic Books, Inc.; 1986.
If you enjoyed this article, check out:
When you consider that so much of our energy and such a large portion of our behavioral repertoire is devoted to ways of ensuring our survival, suicide appears to be perhaps the most inexplicable human behavior. What would make this human machine--which most of the time seems to be resolutely programmed to scratch, claw, and fight to endure through even the most dire situations--so easily decide to give it all up, even when the circumstances may not objectively seem all that desperate? Suicide is a difficult behavior to justify rationally, and yet it is shockingly common. More people throughout the world end their lives by suicide each year than are killed by homicide and wars combined.
The multitudinous influences that are thought to contribute to suicidal behavior are also very convoluted and difficult to untangle. Clearly, among different individuals the factors that lead to an act of suicide will vary considerably; nevertheless, there are some variables that are thought to generally increase the risk of suicidal behavior. A number of studies have, for example, demonstrated that genetic factors are associated with a predisposition to suicidal behavior. Also, early-life adversity--like sexual abuse, physical abuse, or severe neglect--has been strongly linked to suicide. However, even among groups with higher suicide risk there is a great deal of variability, which adds to the complexity of the issue. For example, personality traits like impulsiveness and aggression have been associated with an increased risk of suicide--but this relationship is seen primarily in younger people. It is not as apparent in older individuals who display suicidal behavior; they are often characterized by higher levels of harm avoidance instead of risk-taking.
While there are a number of predisposing factors involving personal characteristics or previous life events that make suicidal ideation and behavior more likely, there are also factors that immediately precede a suicide attempt which are thought to be directly linked to the transition from thinking about suicide to acting on those thoughts. Of course, some of those factors are likely to involve changes in neurobiology and neurochemistry that cause suicide--which may have previously just been an occasional thought--to become the focus of a present-moment plan that is sometimes carried out with great urgency and determination. And, while it is important to be able to identify influences that predispose individuals to suicidal thinking in general, an understanding of the neurobiological factors that precipitate a suicide attempt might open the door for treatments designed to protect an individual from acting on (or experiencing) sudden impulses to complete a suicide.
While the distal predisposing factors for suicidal behavior are difficult to identify due to the myriad influences involved, the proximal neurobiological influences are hard to pinpoint both because of their complexity and because a suicidal crisis is often short-lived and difficult to study. The most direct way to investigate changes in the suicidal brain would be to look at brains of individuals who are suicide completers (i.e. those who are now deceased due to suicide). One reason for focusing on suicide completers is that we can expect some neurochemical--and possibly psychological--differences between suicide completers and those who attempted suicide but are still alive. However, working with postmortem brains has its own limitations: obtaining accurate background information may be challenging without the ability to interview the patient, there may be effects on the brain (e.g. from the process of death and its associated trauma or from drugs/medications taken before death) that may make it hard to isolate factors involved in provoking one towards suicide, and the limitation of only being able to examine the brain at one time makes causal interpretations difficult.
Regardless, investigations into irregularities in the brains of those who exhibit suicidal behavior (both attempters and completers) have identified several possible contributing factors that may influence the decision to act on suicidal thoughts. Many of these factors are also implicated in depressed states, as most suicidal individuals display some characteristics of a depressed mood even if they don't meet the criteria for a diagnosis of major depressive disorder. (This, of course, adds another layer of complexity to interpretation as it is difficult to determine if suicide-related factors are simply characteristics of a depressed mood and not solely related to suicidal actions.) The role of each of these factors in suicidal behavior is still hypothetical, and the relative contribution of each is unknown. However, it is thought that some--or all--of them may be implicated in bringing about the brain state associated with suicidal actions.
Alterations in neurotransmitter systems
Abnormalities in the serotonin system have long been linked to depressive behavior, despite more recent doubts about the central role of serotonin in the etiology of depression. Similarly, there appear to be some anomalies in the serotonin system in the brains of suicidal individuals. In an early study on alterations in the serotonin system in depressed patients, Asberg et al. found that patients with low levels of 5-hydroxyindoleacetic acid, the primary metabolite of serotonin (and thus often used as a proxy measure of serotonin levels), were significantly more likely to attempt suicide. Additionally, those who survive a suicide attempt display a diminished response to the administration of fenfluramine, which is a serotonin agonist that in a typical brain prompts increased serotonin release. A number of neuroimaging studies have also detected reduced serotonin receptor availability in the brains of suicidal patients. This evidence all suggests that abnormalities in the serotonin system play some role in suicidal behavior, although the specifics of that role remain unknown.
As we have learned from investigations of depression, however, it is important to avoid focusing too much on one-neurotransmitter explanations of behavior. Accordingly, a number of other neurotransmitter abnormalities have been detected in suicidal patients as well. For example, gene expression analyses in postmortem brains of individuals who died by suicide have identified altered expression of genes encoding for GABA and glutamate receptors in various areas of the brain. Although the consequences of these variations in gene expression are unknown, abnormalities in GABA and glutamate signaling have both also been hypothesized to play a role in depression.
Abnormalities in the stress response
Irregularities in the stress response have long been implicated in depression, and thus it may not be surprising that stress system anomalies have been observed in patients exhibiting suicidal behavior as well. The hypothalamic-pituitary-adrenal (HPA) axis is a network that connects the hypothalamus, pituitary gland, and adrenal glands; it is activated during stressful experiences. When the HPA axis is stimulated, corticotropin-releasing hormone is secreted from the hypothalamus, which causes the pituitary gland to secrete adrenocorticotropic hormone, which then prompts the adrenal glands to release the stress hormone cortisol. In depressed patients, cortisol levels are generally higher than normal, suggesting the HPA axis is hyperactive; this may be indicative of the patient being in a state of chronic stress.
In suicidal individuals, the HPA axis seems to be dysregulated as well. For example, in one study the HPA activity of a group of psychiatric inpatients was tested using what is known as the dexamethasone suppression test (DST). In this procedure, patients are injected with dexamethasone, a synthetic hormone that should act to suppress cortisol secretion if HPA axis activity is normal; if it does not do so, however, it suggests the HPA axis is hyperactive. Out of 78 patients, 32 displayed abnormal HPA activity on the DST. Over the next 15 years, 26.8% of the individuals with abnormal HPA activity committed suicide, while only 2.9% of the individuals with normal DST results killed themselves.
Another system involved in stress responses that may display irregularities in suicidal individuals is the polyamine stress response (PSR). Polyamines are molecules that are involved in a number of essential cellular functions; their potential role in psychiatric conditions has only been recognized in the past few decades. It is believed that the activation of the PSR and the associated increases in levels of polyamines in the brain may be beneficial, serving a protective role in reducing the impact of a stressor on the brain. And, there appear to be abnormalities in the PSR in the brains of those who have committed suicide. Because the PSR and its role in psychiatric conditions is still just beginning to be understood, however, it is unclear what these alterations in the PSR might mean; future investigations will attempt to elucidate the connection between PSR abnormalities and suicidal behavior.
One of the consequences of stress is the initiation of an inflammatory response. This is thought to be an adaptive reaction to stress, as the stress system likely evolved to deal primarily with physical trauma, and the body would have benefited from reflexive stimulation of the immune system in cases where physical damage had been sustained. This immune system activation would prepare the body to fight off infection that could occur due to potential tissue damage (the inflammatory response is the first step in preventing infection). Thus, it may not be surprising that suicidal patients often display markers of inflammation in the brain. This inflammatory reaction may on its own promote brain changes that increase suicide risk, or it may just be a corollary of the activation of the stress system.
Glial cell abnormalities
While we have a tendency to focus on irregularities in neurons and neuronal communication when investigating the causes of behavior, it is becoming more widely recognized that glial cells also play an essential role in healthy brain function. Accordingly, anomalies in glial cells have been noted in the brains of suicidal patients. Several studies, for example, have identified deficits in the structure or function of astrocytes in the suicidal brain. One study found that cortical astrocytes in post-mortem brains of suicide patients displayed altered morphology. Their enlarged cell bodies and other morphological abnormalities were consistent with the hypothesis that they had been affected by local inflammation. Analyses of gene expression in the postmortem brains of suicide victims also found that genes associated almost exclusively with astrocytes were differentially expressed. While the implications of these studies are not yet fully clear, abnormalities in glial cells represent another area of investigation in our attempts to understand what is happening in the suicidal brain.
Irregularities in neurotransmitter systems, a hyperactive stress response, and anomalous glial cell morphology and density all may be factors that contribute to the suicidal phenotype. But it is unclear at this point if any one of these variables is the factor that determines the transition from suicidal ideation to suicidal behavior. It is more likely that they all may contribute to large-scale changes throughout the brain that lead to suicidal activity. Of course, all of the factors mentioned above may simply be associated with symptoms (like depressed mood) commonly seen in suicidal individuals, and the true culprit for provoking suicidal actions could be a different mechanism altogether, of which we are still unaware.
As mentioned above, this area of research is fraught with difficulties as the brains of suicide completers can only be studied postmortem. One research approach that attempts to circumvent this obstacle while still providing relevant information on the suicidal brain involves the study of pharmacological agents that reduce the risk of suicide. For, if a drug reduces the risk of suicide then perhaps it is reversing or diminishing the impact of neurobiological processes that trigger the event. One example of such a drug is lithium. Lithium is commonly used to treat bipolar disorder but is also recognized to reduce the risk of suicide in individuals who have a mood disorder. Gaining a better understanding of the mechanism of action that underlies this effect might allow for a better understanding of the neurobiology of suicidal behavior as well. Additionally, ketamine is a substance that appears to have fast-acting (within two hours after administration) antidepressant action and also may cause a rapid reduction (as soon as 40 minutes after administration) in suicidal thinking. Understanding how a drug can so quickly cause a shift away from suicidal thoughts may also be able to shed some light on processes that underlie suicidal actions.
Whatever the neurobiological underpinnings of suicidal behavior may be, the search for them should have some urgency about it. Suicide was the 10th leading cause of death in 2013, and yet it seems like a treatment for suicidal behavior specifically is not approached with the same fervor as treatments for other leading causes of death--like Parkinson's disease--that don't actually lead to as many deaths per year as suicide does. Perhaps many consider suicide a fact of life, as something that will always afflict a subset of the population, or perhaps the focus is primarily directed toward treating depression with the assumption that better management of depression will lead to a reduction in suicide attempts. However, if we can come to understand what really happens in the brain of someone immediately before he makes the fatal decision to kill himself, treatment to specifically reduce the risk of suicide--regardless of the underlying disorder--is not out of the realm of possibility. Thus, it seems like a goal worth striving for.
If you asked any self-respecting neuroscientist 25 years ago what causes depression, she would likely have only briefly considered the question before responding that depression is caused by a monoamine deficiency. Specifically, she might have added, in many cases it seems to be caused by low levels of serotonin in the brain. The monoamine hypothesis that she would have been referring to was first formulated in the late 1960s, and at that time was centered primarily around norepinephrine. But in the decades following the birth of the monoamine hypothesis, its focus shifted to serotonin, in part due to the putative success of antidepressant drugs that targeted the serotonin transporter (e.g. selective serotonin reuptake inhibitors, or SSRIs). The monoamine/serotonin hypothesis eventually became generally recognized as viable by the scientific community. Interestingly, it also became widely accepted by the public, who were regularly exposed to television commercials for antidepressant drugs like Prozac, Lexapro, and Celexa--drugs whose commercials specifically mentioned a serotonin imbalance as playing a role in depression.
Over the years, however, the scientific method quietly and efficiently went to work. Evidence gradually accumulated that indicated that the serotonin hypothesis does a very inadequate job of explaining depression. For example, although SSRIs increase serotonin levels within hours after drug administration, if their administration leads to beneficial effects--a big if--it usually takes 2-4 weeks of daily administration for those effects to appear. One would assume that if serotonin levels were causally linked to depression, then soon after serotonin levels increased, mood would begin to improve. Also, reducing levels of serotonin in the brain does not cause depression. The list of studies that don't fully support the serotonin hypothesis of depression is actually quite lengthy, and most of the scientific community now agrees that the hypothesis is insufficient as a standalone explanation of depression.
In the 1990s another hypothesis, known as the neurogenic hypothesis, was proposed with the hopes of filling in some of the holes in the etiology of depression that the monoamine hypothesis seemed to be unable to fill. The neurogenic hypothesis suggests that depression is at least partially caused by an impairment of the brain's ability to produce new neurons, a process known as neurogenesis. Specifically, researchers have focused on neurogenesis in the hippocampus, one of the only areas in the brain where neurogenesis has been observed in adulthood (the other being the subventricular zone).
The neurogenic hypothesis was formulated based on several observations. First, depressed patients seem to have smaller hippocampi than the general population, and their hippocampi also appear to be smaller during periods of depression than during periods of remission. Second, glucocorticoids like cortisol are elevated in depression, and glucocorticoids appear to inhibit neurogenesis in the hippocampus in rodents and non-human primates. Finally, there is evidence that the chronic administration of antidepressants increases neurogenesis in the hippocampus in rodents.
The neurogenic hypothesis thus suggests that depression is associated with a reduction in the birth of new neurons in the hippocampus, an area of the brain important to stress regulation, cognition, and mood. According to this hypothesis, when someone takes antidepressants, the drugs do raise levels of monoamines like serotonin, but they also enact long-term processes that increase neurogenesis in the hippocampus. This neurogenesis is hypothesized to be a crucial part of the reason antidepressants work, and the fact that it takes some time for hippocampal neurogenesis to return to normal may help to explain why antidepressants take several weeks to have an effect.
This may all sound logical, but the neurogenic hypothesis has its own share of problems. For example, while stress-related impairment of neurogenesis has been observed in rodents, we don't have definitive evidence it occurs in humans. Human studies thus far have relied on comparing the size of the hippocampi in depressed and non-depressed patients. While smaller hippocampi have been observed in depressed individuals, it is not clear that this is due to reduced neurogenesis rather than some other type of structural change that might have occurred during depression.
Similarly, while the administration of antidepressants has been associated with increased neurogenesis in rodent models of stress, we don't have clear evidence of this in humans. In humans we again have to rely on looking at things like hippocampal size. Because there could be a number of explanations for changes in the size of the hippocampi, we can't assume neurogenesis is the sole factor involved--or that it is involved at all. Additionally, some studies in rodents have found that antidepressants lead to a reduction in anxiety or depressive symptoms in the absence of increased hippocampal neurogenesis.
Another problem is that when neurogenesis is experimentally decreased in rodents, the animals don't usually display depressive symptoms. Experiments of this type haven't been performed with humans or non-human primates, so we don't know if a reduction in neurogenesis in any species is actually sufficient to cause depression. And no studies have found that increasing neurogenesis alone is enough to alleviate depressive-like symptoms.
Of course, none of this means the neurogenic hypothesis is incorrect, but it does suggest there is a long way to go before we can feel confident about incorporating it fully into our understanding of depression. It is in the scientific community's reluctance to embrace this hypothesis, however, that I see the beauty of science. Although it took decades of testing and revising before the monoamine hypothesis became a widely accepted explanation for depression, one could argue (based on its now-recognized shortcomings) that we accepted it too readily.
However, it seems that many in the scientific community have learned from that mistake. Although there is no shortage of publications whose authors may be too willing to anoint the neurogenic hypothesis as a new unifying theory of depression, the overall tone when speaking of the neurogenic hypothesis seems to be cautious and/or critical. There is also a great deal of discussion now in the literature about the complexity of mood disorders like depression, and about how unlikely it is that any single mechanism--whether impaired neurogenesis or a serotonin deficiency--can explain their manifestation in a diverse population of individuals.
Thus, the neurogenic hypothesis will require much more testing before we can consider it an important piece in the puzzle of depression. Even if further testing supports it, however, it will likely be considered just that--a piece in the puzzle, instead of an overarching explanation of the disorder. And that circumspect approach to explaining depression represents an important advancement in the way we look at psychiatric disorders.
People have been eating psychedelic mushrooms since ancient times. There are even indications--although they are impossible to verify--that psychedelic mushrooms played an important role in cultures like the Maya civilization of Mesoamerica thousands of years ago. Of course, the use of "magic" mushrooms has continued into the present day, but it wasn't until 1958 that Albert Hofmann (the discoverer of LSD) isolated psilocybin as the active hallucinogenic compound in psychedelic mushrooms.
Recently, psilocybin has also received some recognition as a potential treatment for anxiety. For example, a pilot study conducted in 2011 explored the ability of psilocybin to reduce anxiety in individuals with advanced stage cancer. Although it was a small study and only exploratory in nature, it suggested that psilocybin could have some benefit in reducing anxiety and improving mood in patients with a terminal illness.
In a study due to be published soon in Biological Psychiatry, a research group in Switzerland explored a potential mechanism for reduced anxiety after psilocybin administration. The authors, Kraehenmann et al., administered psilocybin or placebo to a group of participants. Then, they monitored the participants' brain activity using functional magnetic resonance imaging (fMRI) while the subjects completed a task that generally increases activation in an area of the brain called the amygdala. The task involved viewing a series of pictures; half of the pictures presented negative stimuli like a car accident, and the other half presented neutral pictures like everyday objects or scenes from daily life.
The amygdala is an almond-shaped collection of nuclei in the temporal lobe (there are actually two amygdalae--one in each hemisphere). Increased activity in the amygdala has been associated with emotional reactions, and especially with fear and anxiety. Hyperactivity in the amygdala has also been observed in depressed patients, and treatment with selective serotonin reuptake inhibitors (SSRIs) has been found to reduce that hyperactivity. This suggests that increased activity in the amygdala may also play a role in symptoms of depression.
Kraehenmann et al. found that psilocybin administration improved mood and decreased anxiety, which was to be expected (magic mushrooms acquired their moniker for a reason). The study, however, also offered some insight into what might be causing that reduction in anxiety. In participants who took psilocybin (as compared to placebo), activity in the right amygdala was reduced while viewing negative images, and activity in the left amygdala was decreased in response to both negative and neutral images.
Psilocybin is thought to act as an agonist at serotonin receptors, meaning it activates those receptors much as serotonin itself would. Thus, psilocybin may have something in common with antidepressants like SSRIs, which also act on serotonin--at least as part of their mechanism. And this suggests that psilocybin should perhaps continue to be investigated for its antidepressant and anxiolytic (anti-anxiety) properties.