2-Minute Neuroscience: Selective Serotonin Reuptake Inhibitors (SSRIs)

SSRIs are the most widely used treatment for depression, and have been since their introduction to the market in the late 1980s. They were formulated based on the hypothesis that depression is caused by low levels of the neurotransmitter serotonin. In this video, I discuss how SSRIs work, along with some questions that have been raised about the serotonin hypothesis since the introduction of SSRIs.

What are we getting wrong in neuroscience?

In 1935, an ambitious neurology professor named Egas Moniz sat in the audience at a symposium on the frontal lobes, enthralled by neuroscientist Carlyle F. Jacobsen's description of some experiments Jacobsen had conducted with fellow investigator John Fulton. Jacobsen and Fulton had damaged the frontal lobes of a chimpanzee named "Becky," and afterwards they had observed a considerable behavioral transformation. Becky had previously been stubborn, erratic, and difficult to train, but post-operation she became placid, imperturbable, and compliant. 

Moniz had already been thinking about the potential therapeutic value of frontal lobe surgery in humans after reading some papers about frontal lobe tumors and how they affected personality. He believed that some mental disorders were caused by static abnormalities in frontal lobe circuitry. By removing a portion of the frontal lobes, he hypothesized he would also be removing neurons and pathways that were problematic, in the process alleviating the patient's symptoms. Although Moniz had been pondering this possibility, Jacobsen's description of the changes seen in Becky was the impetus Moniz needed to try a similar approach with humans. He did so just three months after seeing Jacobsen's presentation, and the surgical procedure that would come to be known as the frontal lobotomy was born.

Moniz's procedure initially involved drilling two holes in a patient's skull, then injecting pure alcohol subcortically into the frontal lobes, with the hopes of destroying the regions where the mental disorder resided. Moniz soon turned to another tool for ablation, however---a steel loop he called a leucotome (which is Greek for "white matter knife")---and began calling the procedure a prefrontal leucotomy. Although his means of assessing the effectiveness of the procedure were inadequate by today's standards---for example, he generally only monitored patients for a few days after the surgery---Moniz reported recovery or improvement in most of the patients who underwent the procedure. Soon, prefrontal leucotomies were being done in a number of countries throughout the world. 

The operation attracted the interest of neurologist Walter Freeman and neurosurgeon James Watts. They modified the procedure again, this time to involve entering the skull from the side using a large spatula. Once inside the cranium, the spatula was wiggled up and down in the hopes of severing connections between the thalamus and prefrontal cortex (based on the hypothesis that these connections were crucial for emotional responses, and could precipitate a disorder when not functioning properly). They also renamed the procedure "frontal lobotomy," as leucotomy implied only white matter was being removed and that was not the case with their method. 

Several years later (in 1946), Freeman made one final modification to the procedure. He advocated for using the eye socket as an entry point to the frontal lobes (again to sever the connections between the thalamus and frontal areas). As his tool to do the ablation, he chose an ice pick. The ice pick was inserted through the eye socket, wiggled around to do the cutting, and then removed. The procedure could be done in 10 minutes; the development of this new "transorbital lobotomy" brought about the real heyday of lobotomy.

The introduction of transorbital lobotomy led to a significant increase in the popularity of the operation---perhaps due to the ease and expediency of the procedure. Between 1949 and 1952, somewhere around 5,000 lobotomies were conducted each year in the United States (the total number of lobotomies done by the 1970s is thought to have been between 40,000 and 50,000). Watts strongly protested the transformation of lobotomy into a procedure that could be done in one quick office visit---and done by a psychiatrist instead of a surgeon, no less---which led him and Freeman to sever their partnership.

Freeman, however, was not discouraged; he became an ardent promoter of transorbital lobotomy. He traveled across the United States, stopping at mental asylums to perform the operation on any patients who seemed eligible and to train the staff to perform the surgery after he had moved on. Freeman himself is thought to have performed or supervised around 3,500 lobotomies; his patients included a number of minors, among them a 4-year-old child who died 3 weeks after the procedure.

Eventually, however, the popularity of transorbital lobotomy began to fade. One would like to think that this happened because people recognized how barbaric the procedure was (along with the fact that the approach was based on somewhat flimsy scientific rationale). The real reasons for abandoning the operation, however, were more pragmatic. The downfall of lobotomy began with questions about the effectiveness of the surgery, especially in treating certain conditions like schizophrenia. It was also recognized that faculties like motivation, spontaneity, and abstract thought suffered irreparably from the procedure. And the final nail in the coffin of lobotomy was the development of psychiatric drugs like chlorpromazine, which for the first time gave clinicians a pharmacological option for intractable cases of mental disorders.

It is easy for us now to look at the practice of lobotomy as nothing short of brutality, and to scoff at what seems like a tenuous scientific explanation for why the procedure should work. It's important, however, to look at such issues in the history of science in the context of their time. In an age when effective psychiatric drugs were nonexistent, psychosurgical interventions were viewed as the "wave of the future." They offered a hopeful possibility for treating disorders that were often incurable and potentially debilitating. And while the approach of lobotomy seems far too non-selective to us now (such serious brain damage was not likely to affect just one mental faculty), the idea that decreasing frontal lobe activity might reduce mental agitation was actually based on the scientific literature available at the time.

Still, it's clear that the decision to attempt to treat psychiatric disorders through inflicting significant brain damage represented a failure of logic at multiple levels. When we discuss neuroscience today, we often assume that our days of such egregious mistakes are over. And while we have certainly progressed since the time of lobotomies (especially in the safeguards protecting patients from such untested and dangerous treatments), we are not that far removed temporally from this sordid time in the history of neuroscience. Today, there is still more unknown about the brain than there is known, and thus it is to be expected that we continue to make significant mistakes in how we think about brain function, experimental methods in neuroscience, and more.

Some of these mistakes may be due simply to a natural human approach to understanding difficult problems. For example, when we encounter a complex problem we often first attempt to simplify it by devising some straightforward way of describing it. Once a basic appreciation is reached, we add to this elementary knowledge to develop a more thorough understanding---one that is more likely to be a better approximation of the truth. However, that overly simplistic initial conceptualization can give birth to countless erroneous hypotheses when used in an attempt to explain something as intricate as the brain. And in science, these types of errors can lead a field astray for years before it finds its way back on course.

Other mistakes involve research methodology. Due to the rapid technological advances in neuroscience that have occurred in the past half-century, we have some truly amazing neuroscience research tools available to us that would have only been science fiction 100 years ago. Excitement about these tools, however, has caused researchers in some cases to begin utilizing them extensively before we are fully prepared to do so. This has resulted in using methods that cannot yet answer the questions we presume they can, and has provided us with results that we are sometimes unable to accurately interpret. In accepting the answers we obtain as legitimate and assuming our interpretations of results are valid, we may commit errors that can confound hypothesis development for some time.

Advances in neuroscience in the 20th and into the 21st century have been nothing short of mind-boggling, and our successes in understanding far outpace our long-standing failures. However, any scientific field is rife with mistakes, and neuroscience is no different. In this article, I will discuss just a few examples of how missteps and misconceptions continue to affect progress in the field of neuroscience.

The ________________ neurotransmitter

Nowadays the fact that neurons use signaling molecules like neurotransmitters to communicate with one another is one of those pieces of scientific knowledge that is widely known even to non-scientists. Thus, it may be a bit surprising that this understanding is less than 100 years old. It was in 1921 that the German pharmacologist Otto Loewi first demonstrated that, when stimulated, the vagus nerve releases a chemical substance that can affect the activity of nearby cells. Several years later, that substance was identified as acetylcholine---a chemical Henry Dale had isolated years earlier, though not yet recognized as a neurotransmitter. It wasn't until the middle of the 20th century, however, that it became widely accepted that neurotransmitters were used throughout the brain. The discovery of other neurotransmitters and neuropeptides would be scattered throughout the second half of the 20th century.

Of course, each time a new neurotransmitter (or other signaling molecule like a neuropeptide) is discovered, one of the first questions scientists want to answer is "what is its role in the brain?" The approach to answering this question generally involves some degree of simplification, as researchers tend to search for one overriding function that can be used to describe the neurotransmitter. As a result, the first really intriguing function discovered for a neurotransmitter often becomes a way of defining it.

Gradually, the discovered functions for the neurotransmitter become so diverse that it is no longer rational to attach one primary role to it, and researchers are forced to revise their initial explanations of the neurotransmitter's function by incorporating new discoveries. Sometimes it is later found that the original function linked to the neurotransmitter does not even match up well with the tasks the chemical is actually responsible for in the brain. The idea that the neurotransmitter has one main function, however, can be difficult to dislodge once it has taken hold. This becomes a problem because that inaccurate conceptualization may lead to years of research seeking evidence to support a particular role for the neurotransmitter, while that hypothesized role may be misunderstood---or outright erroneous.

The neuropeptide oxytocin provides a good example of this phenomenon. The history of oxytocin begins with the same Henry Dale mentioned above. In 1906, Dale found that extracts of ox pituitary glands could speed up uterine contractions when administered to a variety of mammals including cats, dogs, rabbits, and rats. This discovery soon led to the exploration of the use of similar extracts to assist in human childbirth; it was found they could be especially helpful in facilitating labor that was progressing slowly. These effects on childbirth are what earned oxytocin its name, which is derived from the Greek words for "quick birth."

Clinical use of oxytocin didn't become widespread until researchers were able to synthesize the peptide in the laboratory. But after that occurred in the 1950s, oxytocin became the most commonly used agent for inducing labor throughout the world (sold under the trade names Pitocin and Syntocinon). However, despite the fact that oxytocin plays such an important role in a significant percentage of pregnancies today, the vast majority of research and related news on oxytocin in the past few decades has involved very different functions for the hormone: love, trust, and social bonding.

This line of research can be traced back to the 1970s, when investigators learned that oxytocin reached targets throughout the brain, suggesting it might play a role in behavior. Soon after, researchers found that oxytocin injections could prompt virgin female rats to exhibit maternal behaviors like nest building. Researchers then began exploring oxytocin's possible involvement in a variety of social interactions ranging from sexual behavior to aggression. In the early 1990s, some discoveries of oxytocin's potential contribution to forming social bonds emerged from an unlikely research subject: the prairie vole.

Prairie voles. Image courtesy of thenerdpatrol (prairie voles omsi).

The prairie vole is a small North American rodent that looks kind of like a cross between a gopher and a mouse. Prairie voles are somewhat unremarkable animals except for one unusual feature of their social lives: they form what seem to be primarily monogamous long-term relationships with voles of the opposite sex. This is not common among mammals; by most estimates, only around 3 to 5% of mammalian species display evidence of monogamy.

A monogamous rodent species creates an interesting opportunity to study monogamy in the laboratory. Researchers learned that female prairie voles begin to display a preference for a male---a preference that can lead to a long-term attachment---after spending just 24 hours in the same cage as the male. It was also observed that administration of oxytocin made it more likely females would develop this type of preference for a male vole, and treatment with an oxytocin antagonist made it less likely. Thus, oxytocin became recognized as playing a crucial part in the formation of heterosexual social bonds in the prairie vole---a discovery that would help to launch a torrent of research into oxytocin's involvement in social bonding and other prosocial behaviors.

When researchers then turned from rodents like prairie voles to humans, findings suggesting that oxytocin acts to promote positive emotions and behavior in people began to accumulate. Administration of oxytocin, for example, was found to increase trust. People with higher levels of oxytocin were observed to display greater empathy. Oxytocin administration was discovered to make people more generous and to promote faithfulness in long-term relationships. One study even found that petting a dog was associated with increased oxytocin levels---in both the human and the dog. Due to the large number of study results indicating a positive effect of oxytocin on socialization, the hormone earned a collection of new monikers including the love hormone, the trust hormone, and even the cuddle hormone.

Excited by all of these newfound social roles for oxytocin, researchers eagerly---and perhaps impetuously---began to explore the role of oxytocin deficits in psychiatric disorders, along with the possibility of correcting those deficits with oxytocin administration. One disorder that has gained a disproportionate amount of attention in this regard is autism spectrum disorder, or autism. Oxytocin deficits seemed to be a logical explanation for autism since social impairment is a defining characteristic of the disorder, and oxytocin appeared to promote healthy social behavior. As researchers began to delve into the relationship between blood oxytocin levels and autism, however, they did not find a straightforward relationship. Undeterred, investigators explored the effects of intranasal oxytocin administration---which involves spraying the neuropeptide into the nasal passages---on symptoms in autism patients. And initially, there were indications intranasal oxytocin might be effective at improving autism symptoms (more on this below).

Soon, however, some began to question whether all of the excitement surrounding the "trust hormone" had caused researchers to make hasty decisions regarding experimental design, for the studies relying on intranasal administration were using a method that wasn't---and still has not been---fully validated. Researchers turned to the intranasal route because oxytocin that enters the bloodstream does not appear to cross the blood-brain barrier in appreciable amounts; there are indications, however, that the neuropeptide does make it into the brain when delivered intranasally. The problem is that even with intranasal delivery very little oxytocin reaches the brain---according to one estimate, only 0.005% of the administered dose. Even when very high doses are used, the amount that reaches the brain via the intranasal route does not seem comparable to the amount of oxytocin that must be administered directly into the brain (intracerebroventricularly) of an animal to influence behavior.

But many studies have indicated an effect, so what is really going on here? One possibility is that the effects are not due to the influence of oxytocin on the central nervous system, but to oxytocin entering the bloodstream and interacting with the many oxytocin receptors found in the periphery; if true, this would mean that exogenous oxytocin is not having the effects on the brain researchers have hypothesized. Another, more concerning, possibility is that many of the studies published on the effects of intranasal oxytocin suffer from methodological problems like questionable statistical approaches to analyzing data.

Indeed, criticisms of the statistical methods of some of the seminal papers in this field have been made publicly. A recent review also found that studies of intranasal oxytocin often involve small sample sizes; significant findings of small studies are more likely to be statistical aberrations and not representative of true effects. It is also probable that the whole area of research is influenced by publication bias, which is the tendency to publish reports of studies that observe significant results while neglecting to publish reports of studies that fail to see any notable effects. This may seem like a necessary evil, as journal readers are more likely to be interested in learning about new discoveries than experiments that yielded none. Ignoring non-significant findings, though, can lead to the exaggeration of the importance of an observed effect because the available literature may seem to indicate no conflicting evidence (even though such evidence might exist hidden away in the file drawers of researchers throughout the world).
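
To get a feel for how these two issues---small samples and selective publication---can combine to distort a literature, consider a minimal simulation. The "true" effect size, sample size, and number of hypothetical studies below are arbitrary illustrative assumptions, not estimates drawn from the actual oxytocin literature:

```python
# Minimal sketch: many hypothetical labs run the same small study of a tiny
# true effect, but only "significant" results in the expected direction get
# published. The published literature then overstates the effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

true_effect = 0.1    # assumed true drug-placebo difference, in SD units (illustrative)
n_per_group = 20     # small per-group sample size (illustrative)
n_studies = 5000     # number of hypothetical studies

all_effects, published_effects = [], []
for _ in range(n_studies):
    drug = rng.normal(true_effect, 1.0, n_per_group)
    placebo = rng.normal(0.0, 1.0, n_per_group)
    _, p = stats.ttest_ind(drug, placebo)
    observed = drug.mean() - placebo.mean()
    all_effects.append(observed)
    if p < 0.05 and observed > 0:   # publication bias filter
        published_effects.append(observed)

print(f"true effect:               {true_effect:.2f}")
print(f"mean effect, all studies:  {np.mean(all_effects):.2f}")
print(f"mean effect, published:    {np.mean(published_effects):.2f}")
print(f"fraction of studies 'published': {len(published_effects) / n_studies:.1%}")
```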

These potential issues are underscored by the inconsistent research results and failed attempts at replicating, or repeating, studies that have reported significant effects of intranasal oxytocin. For example, one of the most influential studies on intranasal oxytocin, which found that oxytocin increases trust, has failed to replicate several times. And in many cases, null research findings have emerged after initial reports indicated a significant effect. The findings of the early autism studies mentioned above, for example, have been contradicted by multiple randomized controlled trials (see here and here) conducted in the last few years, which reported a lack of a significant therapeutic effect.

Not surprisingly, over the years the simple understanding of oxytocin as a neuropeptide that promotes positive emotions and behavior has also become more complicated as it was learned that the effects of oxytocin might not always be so rosy. In one study, for example, researchers observed intranasal oxytocin to be associated with increased envy and gloating. Another study found oxytocin increased ethnocentrism, or the tendency to view one's own ethnicity or culture as superior to others. And in a recent study, intranasal administration of oxytocin increased aggressive behavior. To add even more complexity to the picture, the effects of oxytocin may not be the same in men and women and may even be disparate in different individuals and different environmental contexts.

In an attempt to explain these discordant findings, researchers have proposed new interpretations of oxytocin's role in social behavior. One hypothesis, for example, suggests that oxytocin is involved in promoting responsiveness to any important social cue---whether it be positive (e.g. smiling) or negative (e.g. aggression); this is sometimes called the "social salience" hypothesis. Despite such recent efforts to reconcile the seemingly contradictory findings in oxytocin research, however, there is still not a consensus as to the effects of oxytocin, and the hypothesis that oxytocin is involved in positive social behavior continues to guide the majority of the research in this area.

Thus, for years now oxytocin research has centered on a role for the neuropeptide that is at best sensationalized and at worst deeply flawed. And oxytocin is only the most recent example of this phenomenon. In the 1990s, dopamine earned a reputation as the "pleasure neurotransmitter." Soon after, serotonin became known as the "mood neurotransmitter." These appellations were based on the most compelling discoveries linked to these neurotransmitters: dopamine is involved in processing rewarding stimuli, and serotonin is targeted by treatments for depression. 

However, now that we know more about these substances, it is clear these short definitions of functionality are much too simplistic. Not only are dopamine and serotonin involved in much more than reward and mood, respectively, but the roles of these two neurotransmitters in reward and mood also seem to be very complicated and poorly understood. For example, most researchers no longer think dopamine signaling causes pleasure, but rather that it is involved in other facets of significant experiences, like the identification of important stimuli in the environment---whether those stimuli be positive (i.e. rewarding) or negative. Likewise, that serotonin levels alone don't determine mood is now common knowledge in scientific circles (and is finding its way into public perception as well). Thus, these short, easy-to-remember titles are misleading---and somewhat useless.

In assigning one function to one neurotransmitter or neuropeptide, we overlook important facts, like the understanding that these neurochemicals often act at multiple receptor subtypes, sometimes with drastically different effects. And we neglect to consider that different areas of the brain have different levels of receptors for each neurochemical, and may be preferentially populated with one receptor subtype over another---leading to different patterns of activity in brain regions with different functional specializations. Add to that all of the downstream effects of receptor activation (which can vary significantly depending on the receptor subtype, the brain region in which it is found, etc.), and you have an extremely convoluted picture. Trying to sum it up in one function is ludicrous.

Not only do these simplifying approaches hinder a more complete understanding of the brain, they also waste countless research hours and funding dollars pursuing confirmation of ideas that will likely have to be replaced eventually with something more elaborate. Still, this type of simplification in science does seem to serve a purpose. Our brains gravitate towards these straightforward ways of explaining things---possibly because without some comprehensible framework to start from, understanding something as complex as the brain seems like a Herculean task. However, if we are going to utilize this approach, we should at least do it with more awareness of our tendency to do so. By recognizing that, when it comes to the brain, the story we are telling is almost always going to be much more complicated than we are inclined to believe, perhaps we can avoid committing the mistakes of oversimplification we have made in the past.

Psychotherapeutic drugs and the deficits they correct

Before the 1950s, the treatment of psychiatric disorders looked very different from how it does today. As discussed above, unrefined neurosurgery like a transorbital lobotomy was considered a viable approach to treating a variety of ailments ranging from panic disorder to schizophrenia. But lobotomy was only one of a number of potentially dangerous interventions used at the time that generally did little to improve the mental health of most patients. Pharmacological treatments were not much more refined, and often involved the use of agents that simply acted as strong sedatives to make a patient's behavior more manageable. 

The landscape began to change dramatically in the 1950s, however, when a new wave of pharmaceutical drugs became part of psychiatric treatment. The first antipsychotics for treating schizophrenia, the first antidepressants, and the first benzodiazepines to treat anxiety and insomnia were all discovered in this decade. Some refer to the 1950s as the "golden decade" of psychopharmacology, and the decades that followed as the "psychopharmacological revolution," since over this time the discovery and development of psychiatric drugs would progress exponentially; soon pharmacological treatments would be the preferred method of treating psychiatric illnesses.

The success of new psychiatric drugs over the second half of the twentieth century came as something of a surprise because the disorders these drugs were being used to treat were still poorly understood. Thus, drugs were often discovered to be effective through a trial and error process, i.e. test as many substances as we can and eventually maybe we'll find one that treats this condition. Because of how little was understood about the biological causes of these disorders, if a drug with a known mechanism was found to be effective in treating a disorder with an unknown mechanism, often it led to a hypothesis that the disorder must be due to a disruption in the system affected by the drug.

Antidepressants serve as a prime example of this phenomenon. Before the 1950s, a biological understanding of depression was essentially nonexistent. The dominant perspective of the day on depression was psychoanalytic---depression was caused by internal conflicts among warring facets of one's personality, and the conflicts were generally considered to be created by the internalization of troublesome or traumatic experiences that one had gone through earlier in life. The only non-psychoanalytic approaches to treatment involved poorly understood procedures like electroconvulsive therapy---which was actually effective in certain cases, but potentially dangerous in others---and treatments like barbiturates or amphetamines, which didn't target anything specific to depression but instead caused widespread sedation or stimulation, respectively.

The first antidepressants were discovered serendipitously. The story of iproniazid, one of the first drugs marketed specifically for depression---the other being imipramine, which was discovered and first used clinically at around the same time---is a good example of the serendipity involved. In the early 1950s, researchers were working with a chemical called hydrazine, investigating its derivatives for anti-tuberculosis properties (tuberculosis was a scourge at the time, and it was routine to test any new chemical for its potential to treat the disease). Interestingly, hydrazine derivatives might never have been tested at all if the Germans hadn't used hydrazine as rocket fuel during World War II, which left large surpluses of the substance at the end of the war that were then sold to pharmaceutical companies on the cheap.

In 1952, a hydrazine derivative called iproniazid was tested on tuberculosis patients in Sea View Hospital on Staten Island, New York. Although the drug did not seem to be superior to other anti-tuberculosis agents in treating tuberculosis, a strange side effect was noted in these preliminary trials: patients who took iproniazid displayed increased energy and significant improvements in mood. One researcher reported that patients were "dancing in the halls 'tho there were holes in their lungs." Although these changes were at first largely dismissed as side effects of iproniazid treatment, researchers eventually became interested in the mood-enhancing effect of the drug in its own right; before the end of the decade, the drug was being used to treat patients with depression.

Around the same time as the discovery of the first antidepressant drugs, a new technique called spectrophotofluorimetry was being developed. This technique allowed researchers to detect changes in the levels of neurotransmitters called monoamines (e.g. dopamine, serotonin, norepinephrine) after the administration of drugs (like iproniazid) to animals. Spectrophotofluorimetry allowed researchers to determine that iproniazid and imipramine were having an effect on monoamines. Specifically, the administration of these antidepressants was linked to an increase in serotonin and norepinephrine levels.

This discovery led to the first biological hypothesis of depression, which suggested that depression is caused by deficiencies in levels of serotonin and/or norepinephrine. At first, this hypothesis focused primarily on norepinephrine and was known as the "noradrenergic hypothesis of depression." Later, however---due in part to the putative effectiveness of antidepressant drugs developed to more specifically target the serotonin system---the emphasis would be placed more on serotonin's role in depression, and the "serotonin hypothesis of depression" would become the most widely accepted view of depression.

The serotonin hypothesis would go on to be endorsed not only by the scientific community, but also---thanks in large part to the frequent referral to a serotonergic mechanism in pharmaceutical ads for antidepressants---by the public at large. It would guide drug development and research for years. As the serotonin hypothesis was reaching its heyday, however, researchers were also discovering that it didn't seem to tell the whole story of the etiology of depression.

A number of problems with the serotonin hypothesis were emerging. One was that antidepressant drugs took weeks to produce a therapeutic benefit, but their effects on serotonin levels seemed to occur within hours after administration. This suggested that, at the very least, some mechanism other than increasing serotonin levels was involved in the therapeutic effects of the drugs. Other research that questioned the hypothesis began to accumulate as well. For example, experimentally depleting serotonin in humans was not found to cause depressive symptoms.

There is now a long list of experimental findings that question the serotonin and noradrenergic hypotheses (indeed, the area of research is muddied even further by evidence suggesting antidepressant drugs may not even be all that effective). Clearly, changes in monoamine levels are an effect of most antidepressants, but it does not seem that there is a direct relationship between serotonin or norepinephrine levels and depression. At a minimum, there must be another component to the mechanism.

For example, some have proposed that increases in serotonin levels are associated with the promotion of neurogenesis (the birth of new neurons) in the hippocampus, which is an important brain region for the regulation of the stress response. But recently researchers have also begun to deviate significantly from the serotonin hypothesis, suggesting bases for depression that are different altogether. One more recent hypothesis, for instance, focuses on a role for the glutamate system in the occurrence of depression. 

The serotonin hypothesis of depression is just one of many hypotheses about the biological causes of psychiatric disorders that were formulated based on the assumption that the primary mechanism of a drug that treats a disorder must be correcting the primary dysfunction that causes the disorder. The same logic was used to devise the dopamine hypothesis of schizophrenia and the low-arousal hypothesis of attention-deficit/hyperactivity disorder (ADHD). Both of these hypotheses were at one point the most commonly touted explanations for schizophrenia and ADHD, respectively, but both are now generally considered too simplistic (at least in their original formulations).

The logic used to construct such hypotheses runs backward from the treatment: drug A increases B and treats disorder C; thus, disorder C must be caused by a deficiency in B. It neglects to recognize that B may be just one factor influencing some downstream target, D, and thus the effects of the drug may be achieved in various ways, of which B is just one. It fails to appreciate the sheer complexity of the nervous system, and the multitude of factors that are likely involved in the onset of psychiatric illness. These factors include not just neurotransmitters, but also hormones, genes, gene expression, aspects of the environment, and an extensive list of other possible influences. The complexity of psychiatry likely means there are an almost inconceivable number of ways for a disorder like depression to develop, and our understanding of the main pathways involved is likely still at a very rudimentary level.

Thus, when we reduce such a complex issue to the levels of one neurotransmitter, we are making a similar type of mistake to the one discussed in the first section of this article, but perhaps with even greater repercussions. For the errors that result from simplifying psychiatric disorders into "one neurotransmitter" maladies affect not just progress in neuroscience, but also the mental and physical health of patients suffering from these disorders. Many of these patients are prescribed psychiatric drugs on the assumption that their disorder is simple enough to be fixed by adjusting some "chemical imbalance"; perhaps it is not surprising, then, that psychiatric drugs are ineffective in a large proportion of patients. And many patients continue taking such drugs---sometimes with minimal benefit---despite experiencing significant side effects. Thus, there is all the more reason in this area to move away from searching for simple answers based on known mechanisms and to venture out into more intimidating and unknown waters.

Our faith in functional neuroimaging

As approaches to creating images of brain activity like positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) were developed in the second half of the twentieth century, they understandably sparked a great deal of excitement among neuroscientists. These methods allowed neuroscience to achieve something once thought to be impossible---the ability to see what was happening in the brain in (close to) real time. By monitoring cerebral blood flow using a technique like fMRI, one can tell which brain areas are receiving the most blood---and by extension which areas are most active neuronally---when someone is performing some action (e.g. completing a memory task, thinking about a loved one, viewing pictures of rewarding or aversive stimuli, etc.).

This method of neuroimaging, which finally allowed researchers to draw conclusions about elusive connections between structure and function, was dubbed functional neuroimaging. Functional neuroimaging methods have predictably become some of the most popular research tools in neuroscience over the last few decades. fMRI surpassed PET as the preferred tool for functional neuroimaging soon after it was developed (due to a variety of factors including better spatial resolution and a less invasive approach), and it has been the investigative method of choice in over 40,000 published studies since the 1990s.

The potential of functional neuroimaging---and fMRI in particular---to unlock countless secrets of the brain intrigued not only investigators but also the popular press. The media quickly realized that the results of fMRI studies could be simplified, combined with some colorful pictures of brain scans, and sold to the public as representative of huge leaps in understanding which parts of the brain are responsible for certain behaviors or patterns of behaviors. The simplification of these studies led to incredible claims that intricate patterns of behavior and emotion like religion or jealousy emanated primarily from one area of the brain. 

Fortunately, this wave of sensationalism has died down a bit as many neuroscientists have been vocal about how this type of oversimplification is taken so far that it propagates untruths about the brain and misrepresents the capabilities of functional neuroimaging. The argument against oversimplifying fMRI results, though, is often an argument against oversimplification itself. The assumption is that the methodology is not flawed, but the interpretation is. More and more researchers, however, are asserting that not only are the reported results of neuroimaging experiments ripe for misinterpretation, but also they are often simply inaccurate.

One major problem with functional neuroimaging involves how the data from these experiments are handled. In fMRI, for example, the scanner creates a representation of the brain by dividing an image of it into thousands of small 3-D cubes called voxels. Each voxel can represent the activity of over a million neurons. Researchers must then analyze the data to determine which voxels are indicative of higher levels of blood flow, and these results are used to determine which areas of the brain are most active. Most of the brain is active at all times, however, so researchers must compare activity in each voxel during the task they are interested in to activity in that same voxel during a baseline or control condition, in order to determine whether blood flow there is truly elevated.
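
To make the scale of the problem concrete, here is a bare-bones sketch of that voxel-by-voxel logic using purely synthetic data and made-up dimensions. This is not the processing pipeline of any real fMRI package---just an illustration of how one statistical test per voxel quickly becomes tens of thousands of tests:

```python
# Bare-bones illustration of voxel-wise analysis: for each voxel, compare its
# signal during "task" volumes to its signal during "baseline" volumes.
# The data here are pure noise and the dimensions are made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

n_voxels = 50_000          # a single image can contain tens of thousands of voxels
n_task, n_base = 100, 100  # number of volumes acquired in each condition

task_signal = rng.normal(0, 1, (n_voxels, n_task))
baseline_signal = rng.normal(0, 1, (n_voxels, n_base))

# One independent-samples t-test per voxel (axis=1 runs all 50,000 tests at once).
t_vals, p_vals = stats.ttest_ind(task_signal, baseline_signal, axis=1)

# Naive thresholding: call a voxel "active" whenever p < 0.05, with no
# correction for the 50,000 comparisons just performed.
active = p_vals < 0.05
print(f"voxels labeled active: {active.sum()} of {n_voxels}")
# The data are pure noise, so every flagged voxel is a false positive --
# and roughly 5% of the voxels (about 2,500) get flagged anyway.
```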

Due to the sheer volume of data, an issue arises with the task of deciding whether the blood flow observed in a particular voxel is truly above baseline. Each fMRI image can consist of anywhere from 40,000 to 500,000 voxels, depending on the settings of the machine, and each experiment involves many images (sometimes thousands), each taken a couple of seconds apart. This creates a statistical complication called the multiple comparisons problem: if you perform a large number of statistical tests, you are far more likely to find at least one seemingly significant result purely by chance than if you had performed just one test.

For example, if you flip a coin ten times, it would be highly unlikely that you would get tails nine times. But if you flipped each of 50,000 coins ten times, you would be very likely to see that result for at least one of them. That coin-flip result is what, in experimental terms, we would call a false positive. If you're using a typical coin, getting nine tails out of ten flips doesn't tell you anything about the inherent qualities of the coin---it's just a statistical aberration that occurred by chance. The same type of thing is more likely to occur when a researcher makes the sometimes millions of comparisons (between active and baseline voxels) involved in an fMRI study. By chance alone, it's likely some of them will appear to indicate a significant level of activity.
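
The arithmetic behind the coin analogy is easy to check directly (assuming fair coins and treating "nine times" as exactly nine tails out of ten flips):

```python
# Working through the coin-flip analogy: how likely is "9 tails in 10 flips"
# for a single fair coin, and how often should it show up among 50,000 coins?
from math import comb

p_single = comb(10, 9) * (0.5 ** 10)           # exactly 9 tails in 10 flips
p_at_least_one = 1 - (1 - p_single) ** 50_000  # at least one of 50,000 coins shows it
expected_count = 50_000 * p_single             # expected number of such coins

print(f"P(9 tails in 10 flips), one coin:  {p_single:.4f}")        # about 0.01
print(f"P(at least one of 50,000 coins):   {p_at_least_one:.6f}")  # essentially 1
print(f"expected number of 'lucky' coins:  {expected_count:.0f}")  # about 488
```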

fMRI image of a dead Atlantic salmon. Taken from Bennett et al. (2009).

This problem was exemplified through an experiment conducted by a group of researchers in 2009 that involved an fMRI scan of a dead Atlantic salmon (yes, the fish). The scientists put the salmon in an fMRI scanner and showed the fish a collection of pictures depicting people engaged in different social situations. They went so far as to ask the salmon---again, a dead fish---what emotion the people in the photographs were experiencing. When the researchers analyzed their data without correcting for the multiple comparisons problem, they observed the miraculous: the dead fish appeared to display brain activity that indicated it was "thinking" about the emotions being portrayed in the photographs. Of course this wasn't what was really going on; instead, it was that the false positives emerging due to the multiple comparisons problem made it appear as if there was real activity occurring in the fish's brain when obviously there was not.

The salmon experiment shows how serious a concern the multiple comparisons problem can be when it comes to analyzing fMRI data. The problem, however, is a well-known issue by now, and most researchers correct for it in some way when statistically analyzing their neuroimaging data. Even today, however, not all do---a 2012 review of 241 fMRI studies found that the authors of 41% of them did not report making any adjustments to account for the multiple comparisons problem. Even when conscious attempts to avoid the multiple comparisons problem are made, though, there is still a question of how effective they are at producing reliable results.
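
The most generic of these adjustments simply make the per-test threshold more stringent. As a rough sketch---using two textbook corrections (Bonferroni and the Benjamini-Hochberg false discovery rate procedure) applied to simulated p-values, rather than the specific correction machinery built into any fMRI package---the idea looks like this:

```python
# Two textbook multiple-comparison corrections applied to a batch of p-values.
# This is generic statistics, not the correction machinery of any fMRI package.
import numpy as np

def bonferroni(p_vals, alpha=0.05):
    """Reject only where p < alpha / (number of tests)."""
    p_vals = np.asarray(p_vals)
    return p_vals < alpha / p_vals.size

def benjamini_hochberg(p_vals, alpha=0.05):
    """Benjamini-Hochberg FDR procedure: find the largest k such that the
    k-th smallest p-value is <= (k / m) * alpha, then reject the k smallest."""
    p_vals = np.asarray(p_vals)
    m = p_vals.size
    order = np.argsort(p_vals)
    ranked = p_vals[order]
    below = ranked <= alpha * np.arange(1, m + 1) / m
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()     # index of the largest qualifying p-value
        rejected[order[:k + 1]] = True     # reject it and every smaller p-value
    return rejected

# Example: 50,000 p-values generated from pure noise (uniform on [0, 1]).
rng = np.random.default_rng(2)
noise_p = rng.uniform(0, 1, 50_000)

print("uncorrected 'significant' tests:", (noise_p < 0.05).sum())    # roughly 2,500
print("Bonferroni survivors:           ", bonferroni(noise_p).sum())
print("Benjamini-Hochberg survivors:   ", benjamini_hochberg(noise_p).sum())
# With pure-noise input, both corrected counts are almost always zero.
```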

For example, one method for dealing with the multiple comparisons problem that has become popular among fMRI researchers is called clustering. In this approach, only when clusters of contiguous voxels are active together is there enough cause to consider a region of the brain more active than baseline. Part of the rationale here is that if a result is legitimate, it is more likely to involve aggregates of active voxels, and so by focusing on clusters instead of individual voxels one can reduce the likelihood of false positives. 
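
A minimal sketch of the idea, using a synthetic 3-D "statistical map" and an arbitrary hand-picked cluster-size cutoff, might look like the following. (Real packages derive their cutoffs from a statistical model of the data's spatial smoothness rather than picking a number by hand; this is only meant to convey the basic logic.)

```python
# Rough sketch of cluster-extent thresholding: threshold a statistical map,
# find groups of contiguous suprathreshold voxels, and keep only clusters
# larger than a size cutoff. The map is synthetic and the cutoff is hand-picked.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)

# A made-up 3-D "statistical map" of z-values: mostly noise, with one small
# block of genuinely elevated signal planted in a corner.
stat_map = rng.normal(0, 1, (40, 48, 40))
stat_map[5:9, 5:9, 5:9] += 4.0

voxel_threshold = 3.1      # per-voxel z threshold (roughly p < 0.001)
cluster_size_cutoff = 20   # arbitrary minimum cluster size, in voxels

suprathreshold = stat_map > voxel_threshold

# Label connected groups (clusters) of suprathreshold voxels and measure their sizes.
labels, n_clusters = ndimage.label(suprathreshold)
cluster_sizes = ndimage.sum(suprathreshold, labels, index=range(1, n_clusters + 1))
surviving = [s for s in cluster_sizes if s >= cluster_size_cutoff]

print(f"suprathreshold voxels: {int(suprathreshold.sum())}")
print(f"clusters found: {n_clusters}, clusters surviving the size cutoff: {len(surviving)}")
# Isolated noise voxels form tiny clusters that fall below the cutoff, while
# the planted block typically survives as a single large cluster.
```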

The problem with clustering is that it doesn't always seem to work that well. For example, a study published in 2016 analyzed fMRI data from close to 500 subjects using three of the most popular fMRI software packages and found that one common approach to clustering still led to a false positive rate of up to 70%. So, even when researchers take pains to account for the multiple comparisons problem, the results often don't inspire confidence that the effect observed is real and not just a result of random fluctuations in brain activity.

This isn't to say that no fMRI data should be trusted, or that fMRI shouldn't be used to explore brain activity. Rather, it suggests that much more care needs to be taken to ensure fMRI data are handled properly, to avoid drawing erroneous conclusions. Unfortunately, however, the difficulties with fMRI don't begin and end with the multiple comparisons problem. Many fMRI studies also suffer from small sample sizes. This makes it harder to detect a true effect, and it means that when some effect is observed, it is more likely to be a false positive. Additionally, it means that when a true effect is observed, its size is more likely to be exaggerated. Some researchers have also argued that neuroimaging research suffers from publication bias, which further inflates the importance of any significant findings because conflicting evidence may not be publicly available.
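
The sample-size issue is easy to demonstrate with one more small simulation (the effect size, sample sizes, and number of simulated experiments are arbitrary choices for illustration): small studies detect a modest true effect only a fraction of the time, and the runs that do cross the significance threshold tend to overestimate it.

```python
# Minimal illustration of the small-sample problem: for a modest true effect,
# compare a small and a large study in terms of (1) how often the effect is
# detected and (2) how much it is overestimated in the runs that reach
# significance. All numbers here are arbitrary illustrative choices.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
true_effect = 0.4  # assumed true group difference, in standard-deviation units

def simulate(n_per_group, n_runs=2000):
    significant_effects = []
    for _ in range(n_runs):
        a = rng.normal(true_effect, 1.0, n_per_group)
        b = rng.normal(0.0, 1.0, n_per_group)
        _, p = stats.ttest_ind(a, b)
        if p < 0.05:
            significant_effects.append(a.mean() - b.mean())
    power = len(significant_effects) / n_runs
    mean_sig = np.mean(significant_effects) if significant_effects else float("nan")
    return power, mean_sig

for n in (15, 100):
    power, mean_sig = simulate(n)
    print(f"n per group = {n:3d}: detected in ~{power:.0%} of runs, "
          f"mean 'significant' effect ~{mean_sig:.2f} (true effect = {true_effect})")
```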

All in all, this suggests the need for more caution when it comes to conducting and interpreting the results of fMRI experiments. fMRI is an amazing technology that offers great promise in helping us to better understand the nervous system. However, functional neuroimaging is a relatively young field, and we are still learning how to properly use techniques like fMRI. It's to be expected, then---as with any new technology or recently developed field---that there will be a learning curve as we develop an appreciation for the best practices in how to obtain data and interpret results. Thus, while we continue to learn these things, we should use considerable restraint and a critical eye when assessing the results of functional neuroimaging experiments.

*********************************

Progress in neuroscience over the past several centuries has changed our understanding of what it means to be human. Over that time, we learned that the human condition is inextricably connected to this delicate mass of tissue suspended in cerebrospinal fluid in our cranium. We discovered that most afflictions that affect our behavior originate in that tissue, and then we started to figure out ways to manipulate brain activity---through the administration of various substances both natural and man-made---to treat those afflictions. We developed the ability to observe activity in the brain as it occurs, making advances in understanding brain function that humans were once thought to be incapable of. And there are many research tools in neuroscience that are still being refined, but which hold the promise of even greater breakthroughs over the next 50 years.

The mistakes made along the way are to be expected. As a discipline grows, the accumulation of definitive knowledge does not follow a straight trajectory. Rather, it involves an accurate insight followed by a period of fumbling around in the dark before another truthful deduction is made. Neuroscience is no different. Although we have a tendency to think highly of our current state of knowledge in the field, chances are that at any point in time it is still infested with errors. The goal is not to achieve perfection in that sense, but simply to remain cognizant of the impossibility of doing so. By recognizing that we never know as much as we think we know, and by frequently assessing which approaches to understanding are leading us astray, we are more likely to arrive at an approximation of the truth.

 

References (in addition to linked text above):

Finger, S. (1994). Origins of Neuroscience. New York, NY: Oxford University Press.

Lopez-Munoz, F., & Alamo, C. (2009). Monoaminergic neurotransmission: The history of the discovery of antidepressants from 1950s until today. Current Pharmaceutical Design, 15(14), 1563-1586. DOI: 10.2174/138161209788168001

Valenstein, E. S. (1986). Great and Desperate Cures: The Rise and Decline of Psychosurgery and Other Radical Treatments for Mental Illness. New York, NY: Basic Books.


Let there be light: how light can affect our mood

If you're looking for an indication of how intricately human physiology is tied to the environment our species evolved in, you need look no further than our circadian clock. The internal environment of our body is regulated by 24-hour cycles that closely mirror the time it takes for the earth to rotate once on its axis, and these cycles are shaped by changes in the external environment (e.g. fluctuating levels of daylight) associated with that rotation. This 24-hour cycle regulates everything from sleep to metabolic rate to hormone release, and it is so refined that it continues even in the absence of environmental cues. In other words, even if you place a person in a room with no windows to see when the sun rises and sets and no clocks to tell the time, they will maintain a regular circadian rhythm that approximates 24 hours.

Despite the ability of circadian rhythms to persist in the absence of environmental cues, however, our body clock is very responsive to the presence of light in the external environment. It uses information about illumination levels to synchronize diurnal physiological functions to occur during daylight hours and nocturnal functions to occur during the night. Thus, the presence or absence of light in the environment can indicate whether systems that promote wakefulness or sleep should be activated. In this way, ambient light (or lack thereof) becomes an important signal that can lead to the initiation of an array of biological functions.

It may not be surprising, then, that abnormalities in environmental illumination (e.g. it is light when the body's clock expects it to be dark) can have a generally disruptive effect on physiological function. Indeed, unexpected changes in light exposure have been associated with sleep disturbances, cognitive irregularities, and even mood disorders. Many of these problems are thought to occur due to a lack of accord between circadian rhythms and environmental light; however, a role is now also being recognized for light's ability to affect mood directly, without first influencing circadian rhythms.

Physiology of light detection

For light to be able to influence the 24-hour clock, information about light in the environment must first be communicated to the brain. In non-mammalian vertebrates (e.g. fish, amphibians), there are photoreceptors outside of the eye that can accomplish this task. For example, some animals like lizards have a photoreceptive area below the skin on the top of their heads. This area, sometimes referred to as the third eye, responds to stimulation from light and sends information regarding light in the environment to areas of the brain involved in regulating circadian rhythms.

In humans and other mammals, however, it seems the eyes act as the primary devices for carrying information about light to the brain---even when that information isn't used in the process of image formation. The fact that some blind patients are able to maintain circadian rhythms and display circadian-related physiological changes in response to light stimulation suggests that the retinal mechanism for detecting light for non-image-forming functions may involve cells other than the traditional photoreceptors (i.e. rods and cones). While up until about ten years ago it was thought that rods and cones were the only photoreceptive cells in the retina, it is now believed there may be a third class of photoreceptive cell. These cells, called intrinsically photosensitive retinal ganglion cells (ipRGCs), can respond to light independently of rods and cones. They are thought to have a limited role in conscious sight and image formation, but they may play an important part in transmitting information about environmental light to the brain.

ipRGCs project to various areas of the brain thought to be involved in the coordination of circadian rhythms, but their most important connection is to the suprachiasmatic nuclei (SCN). The SCN are paired structures found in the hypothalamus that each contain only about 10,000 neurons. Although 10,000 neurons is a relatively paltry number compared to other areas of the brain, these combined 20,000 neurons make up what is often referred to as the "master clock" of the body. Through an ingenious mechanism involving cycles of gene transcription and suppression (see here for more about this mechanism), the cells of the SCN independently display circadian patterns of activity, acting as reliable timekeepers for the body. Projections from the SCN to various other brain regions are responsible for coordinating circadian activity throughout the brain.

Although the cells in the SCN are capable of maintaining circadian rhythms on their own, they need information from the external environment to match their oscillatory activity up with the solar day. This is where input from ipRGCs comes in; most of this input is supplied via a pathway that travels directly from the retina to the SCN called the retinohypothalamic tract. This tract uses glutamate signaling to notify the SCN when there is light in the external environment, ensuring SCN activity is in the diurnal phase when there is daylight present.

Thus, there is a complex machinery responsible for maintaining physiological activity on a semblance of a 24-hour schedule and matching that circadian cycle up with what is really going on in the outside world. When the operation of this machinery is disrupted in some way, however, it can contribute to a variety of problems.

Indirect effects of light on mood

The brain has evolved a number of mechanisms that allow circadian rhythms to remain synchronized with the solar day. However, when there are rapid changes in the timing of illumination in the external environment, this can lead to a desynchronization of circadian rhythms. This desynchronization then seems to have a disruptive effect on cognition and mood; thus, these effects are described as indirect effects of light on mood because light must first affect circadian rhythms, which in turn affect mood.

Transmeridian travel and shift work

An example of this type of circadian disruption occurs during rapid transmeridian travel, such as flying from New York to California. Crossing multiple time zones causes the body's clock to become discordant with the solar day; in the case of flying from New York to California the body would expect the sun to go down three hours later than it actually would in the new time zone. This can result in a condition colloquially known as jet lag, but medically referred to by terms that imply circadian disruptions: desynchronosis or circadian dysrhythmia.

Transmeridian travel can lead to a number of both cognitive and physical symptoms. Sleep disturbances afterwards are common, as are disturbances of mood and energy like irritability and fatigue. Physical complaints like headache also frequently occur, and studies have found individuals who undergo transmeridian travel subsequently display decreased physical performance and endurance. Transmeridian travel has even been found to delay ovulation and disrupt the menstrual cycle in women. One study found airline workers who had been exposed to transmeridian travel for four years displayed deficits in cognitive performance, suggesting there may be a cumulative effect of jet lag on cognition.

Similar disruptions in cognition and physiological function can be seen in individuals who are exposed to high levels of nighttime illumination (e.g. those who work a night shift). People who are awake during nighttime hours and attempt to sleep during the day generally experience sleep disturbances that are associated with cognitive deficits and even symptoms of depression. The long-term effects of continued sleep/wake cycle disruption due to shift work involve a variety of negative outcomes, including an increased risk of cancer.

Seasonal affective disorder

In some cases of depression, symptoms begin to appear as the daylight hours become shorter in the fall and winter months. The symptoms then often subside in the spring or summer, only to recur annually. This type of seasonal oscillation of depressive symptoms is known as seasonal affective disorder (SAD), and circadian rhythms are hypothesized to be at the heart of the affliction. The leading hypotheses regarding the etiology of SAD suggest it is associated with a desynchronization of circadian rhythms caused by seasonal changes in the length of the day.

According to this hypothesis, in patients with SAD circadian rhythms that are influenced by light become delayed when the sun rises later in the winter. However, some cycles (like the sleep-wake cycle) aren't delayed in the same manner, leading to a desynchronization between these biological rhythms and the circadian oscillations of the SCN. One approach to treating SAD that has shown promise has been to expose patients to bright artificial light in the morning. This is meant to mimic the type of morning light exposure patients would receive during the spring and summer, and possibly shift their circadian rhythms (via the retinohypothalamic tract---see above) to regain synchrony. Indeed, studies have found bright light therapy to be just as effective as fluoxetine (Prozac) in treating patients with SAD.

Direct effects of light on mood

In the examples discussed so far, light exposure is hypothesized to lead to changes in mood due to the effects it can have on circadian rhythms. However, it is also becoming recognized that light exposure may be able to directly alter cognition and mood. The mechanisms underlying these effects are still poorly understood, but elucidating them may further aid us in understanding how light may be implicated in mood disorders.

The first studies in this area found that exposure to bright light decreased sleepiness, increased alertness, and improved performance on psychomotor vigilance tasks. More recently, it was observed that exposure to blue wavelength light activated areas of the brain involved in executive functions; another study found that exposure to blue wavelength light increased activity in areas of the brain like the amygdala and hypothalamus during the processing of emotional stimuli.

While it is still unclear what some of these direct effects on brain activity mean in functional terms, awareness of the potential effects of blue wavelength light has led to the investigation of how the use of electronic devices before bed might affect sleep. The results are harrowing for those of us who are prone to use a computer, phone, or e-reader leading up to bedtime: a recent study found that reading an e-reader for several hours before bed led to difficulty falling asleep, decreased alertness the next morning, and delays in the timing of the circadian clock.

Thus, it does seem that light is capable of affecting cognition and mood directly, and the effects may be surprisingly extensive. Interestingly, these types of effects have also been observed in studies with blind individuals, suggesting that direct effects of light exposure (like indirect effects) may be triggered by information sent via the non-image-forming cells in the retina (e.g. ipRGCs). Although this is a pathway by which light can affect mood without first influencing circadian rhythms, there is evidence that circadian rhythms can still moderate the effect, as the direct effects of light may differ depending on the time of day the exposure occurs.

Light's powerful influence

Research into the effects of light on the brain has identified a potentially important role for light exposure in influencing mood and cognition. However, there is still much to be learned about the ways in which light is capable of exerting these types of effects. Nevertheless, this important area of research has brought to light (no pun intended) a previously unconsidered factor in the etiology of mood disorders. Furthermore, it has begun to raise awareness of the effects light might be having even during seemingly innocuous activities like using electronic devices before bed. When one considers how important a role sunlight has played in the survival of our species, it makes sense that the functioning of our bodies is so closely intertwined with the timing of the solar day. Perhaps what is surprising is that the advent of artificial lighting led us to believe we could overcome the influence of that relationship. Recent research, however, suggests that our connection to daylight is more powerful than we had imagined.

LeGates, T., Fernandez, D., & Hattar, S. (2014). Light as a central modulator of circadian rhythms, sleep and affect. Nature Reviews Neuroscience, 15(7), 443-454. DOI: 10.1038/nrn3743

The neurobiological underpinnings of suicidal behavior

When you consider that so much of our energy and such a large portion of our behavioral repertoire is devoted to ways of ensuring our survival, suicide appears to be perhaps the most inexplicable human behavior. What would make this human machine--which most of the time seems to be resolutely programmed to scratch, claw, and fight to endure through even the most dire situations--so easily decide to give it all up, even when the circumstances may not objectively seem all that desperate? Suicide is a difficult behavior to justify rationally, and yet it is shockingly common. More people throughout the world end their lives by suicide each year than are killed by homicide and wars combined.

The multitudinous influences that are thought to contribute to suicidal behavior are also very convoluted and difficult to untangle. Clearly, among different individuals the factors that lead to an act of suicide will vary considerably; nevertheless, there are some variables that are thought to generally increase the risk of suicidal behavior. A number of studies have, for example, demonstrated that genetic factors are associated with a predisposition to suicidal behavior. Also, early-life adversity--like sexual abuse, physical abuse, or severe neglect--has been strongly linked to suicide. However, even among groups with higher suicide risk there is a great deal of variability, which adds to the complexity of the issue. For example, personality traits like impulsiveness and aggression have been associated with an increased risk of suicide--but this relationship is seen primarily in younger people. It is not as apparent in older individuals who display suicidal behavior; they are often characterized by higher levels of harm avoidance instead of risk-taking.

While there are a number of predisposing factors involving personal characteristics or previous life events that make suicidal ideation and behavior more likely, there are also factors immediately preceding a suicide attempt that are thought to be directly linked to the transition from thinking about suicide to acting on those thoughts. Of course, some of those factors are likely to involve changes in neurobiology and neurochemistry that cause suicide--which may have previously just been an occasional thought--to become the focus of a present-moment plan that is sometimes carried out with great urgency and determination. And, while it is important to be able to identify influences that predispose individuals to suicidal thinking in general, an understanding of the neurobiological factors that precipitate a suicide attempt might open the door for treatments designed to protect an individual from acting on (or experiencing) sudden impulses to complete a suicide.

While the distal predisposing factors for suicidal behavior are difficult to identify because of the myriad influences involved, the proximal neurobiological influences are hard to pinpoint both because of their complexity and because a suicidal crisis is often short-lived and difficult to study. The most direct way to investigate changes in the suicidal brain would be to look at the brains of suicide completers (i.e. those who are now deceased due to suicide). One reason for focusing on suicide completers is that we can expect some neurochemical--and possibly psychological--differences between suicide completers and those who attempted suicide but are still alive. However, working with postmortem brains has its own limitations: obtaining accurate background information may be challenging without the ability to interview the patient; there may be effects on the brain (e.g. from the process of death and its associated trauma, or from drugs/medications taken before death) that make it hard to isolate the factors that provoked the person toward suicide; and being able to examine the brain at only a single point in time makes causal interpretations difficult.

Regardless, investigations into irregularities in the brains of those who exhibit suicidal behavior (both attempters and completers) have identified several possible contributing factors that may influence the decision to act on suicidal thoughts. Many of these factors are also implicated in depressed states, as most suicidal individuals display some characteristics of a depressed mood even if they don't meet the criteria for a diagnosis of major depressive disorder. (This, of course, adds another layer of complexity to interpretation, as it is difficult to determine whether such factors are simply characteristics of a depressed mood rather than specifically related to suicidal actions.) The role of each of these factors in suicidal behavior is still hypothetical, and the relative contribution of each is unknown. However, it is thought that some--or all--of them may be implicated in bringing about the brain state associated with suicidal actions.

Alterations in neurotransmitter systems

Abnormalities in the serotonin system have long been linked to depressive behavior, despite more recent doubts about the central role of serotonin in the etiology of depression. Similarly, there appear to be some anomalies in the serotonin system in the brains of suicidal individuals. In an early study on alterations in the serotonin system in depressed patients, Asberg et al. found that patients with low levels of 5-hydroxyindoleacetic acid, the primary metabolite of serotonin (and thus often used as a proxy measure of serotonin levels), in their cerebrospinal fluid were significantly more likely to attempt suicide. Additionally, those who survive a suicide attempt display a diminished response to the administration of fenfluramine, a serotonin agonist that in a typical brain prompts increased serotonin release. A number of neuroimaging studies have also detected reduced serotonin receptor availability in the brains of suicidal patients. All of this evidence suggests that abnormalities in the serotonin system play some role in suicidal behavior, although the specifics of that role remain unknown.

As we have learned from investigations of depression, however, it is important to avoid focusing too much on one-neurotransmitter explanations of behavior. Accordingly, a number of other neurotransmitter abnormalities have been detected in suicidal patients as well. For example, gene expression analyses in postmortem brains of individuals who died by suicide have identified altered expression of genes encoding GABA and glutamate receptors in various areas of the brain. Although the consequences of these variations in gene expression are unknown, abnormalities in GABA and glutamate signaling have both also been hypothesized to play a role in depression.

Abnormalities in the stress response

Irregularities in the stress response have long been implicated in depression, and thus it may not be surprising that stress system anomalies have been observed in patients exhibiting suicidal behavior as well. The hypothalamic-pituitary-adrenal (HPA) axis is a network that connects the hypothalamus, pituitary gland, and adrenal glands; it is activated during stressful experiences. When the HPA axis is stimulated, corticotropin-releasing hormone is secreted from the hypothalamus, which causes the pituitary gland to secrete adrenocorticotropic hormone, which then prompts the adrenal glands to release the stress hormone cortisol. In depressed patients, cortisol levels are generally higher than normal, suggesting the HPA axis is hyperactive; this may be indicative of the patient being in a state of chronic stress.

In suicidal individuals, the HPA axis seems to be dysregulated as well. For example, in one study the HPA activity of a group of psychiatric inpatients was tested using what is known as the dexamethasone suppression test (DST). In this procedure, patients are injected with dexamethasone, a synthetic hormone that should act to suppress cortisol secretion if HPA axis activity is normal; if it does not do so, however, it suggests the HPA axis is hyperactive. Out of 78 patients, 32 displayed abnormal HPA activity on the DST. Over the next 15 years, 26.8% of the individuals with abnormal HPA activity committed suicide, while only 2.9% of the individuals with normal DST results killed themselves.
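To put those percentages in perspective, here is a minimal back-of-the-envelope calculation in Python using only the figures quoted above (it is a rough comparison of the reported rates, not a re-analysis of the study's data):

    # Suicide rates over the 15-year follow-up, as reported above.
    rate_abnormal_dst = 0.268  # 26.8% of patients with abnormal DST results
    rate_normal_dst = 0.029    # 2.9% of patients with normal DST results

    # Ratio of the two rates -- a crude estimate of relative risk.
    relative_risk = rate_abnormal_dst / rate_normal_dst
    print(f"Approximate relative risk: {relative_risk:.1f}")  # ~9.2

In other words, based on the reported rates, patients with abnormal DST results were roughly nine times more likely to die by suicide over the follow-up period than patients with normal results.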

Another system involved in stress responses that may display irregularities in suicidal individuals is the polyamine stress response (PSR). Polyamines are molecules that are involved in a number of essential cellular functions; their potential role in psychiatric conditions has only been recognized in the past few decades. It is believed that the activation of the PSR and the associated increases in levels of polyamines in the brain may be beneficial, serving a protective role in reducing the impact of a stressor on the brain. And, there appear to be abnormalities in the PSR in the brains of those who have committed suicide. Because the PSR and its role in psychiatric conditions are still just beginning to be understood, however, it is unclear what these alterations in the PSR might mean; future investigations will attempt to elucidate the connection between PSR abnormalities and suicidal behavior.

One of the consequences of stress is the initiation of an inflammatory response. This is thought to be an adaptive reaction to stress, as the stress system likely evolved to deal primarily with physical trauma, and the body would have benefited from reflexive stimulation of the immune system in cases where physical damage had been sustained. This immune system activation would prepare the body to fight off infection that could occur due to potential tissue damage (the inflammatory response is the first step in preventing infection). Thus, it may not be surprising that suicidal patients often display markers of inflammation in the brain. This inflammatory reaction may on its own promote brain changes that increase suicide risk, or it may just be a corollary of the activation of the stress system.

Glial cell abnormalities

While we have a tendency to focus on irregularities in neurons and neuronal communication when investigating the causes of behavior, it is becoming more widely recognized that glial cells also play an essential role in healthy brain function. Accordingly, anomalies in glial cells have been noted in the brains of suicidal patients. Several studies, for example, have identified deficits in the structure or function of astrocytes in the suicidal brain. One study found that cortical astrocytes in postmortem brains of suicide patients displayed altered morphology. Their enlarged cell bodies and other morphological abnormalities were consistent with the hypothesis that they had been affected by local inflammation. Analyses of gene expression in the postmortem brains of suicide victims also found that genes associated almost exclusively with astrocytes were differentially expressed. While the implications of these studies are not yet fully clear, abnormalities in glial cells represent another area of investigation in our attempts to understand what is happening in the suicidal brain.

Future directions

Irregularities in neurotransmitter systems, a hyperactive stress response, and anomalous glial cell morphology and density all may be factors that contribute to the suicidal phenotype. But it is unclear at this point if any one of these variables is the factor that determines the transition from suicidal ideation to suicidal behavior. It is more likely that they all may contribute to large-scale changes throughout the brain that lead to suicidal activity. Of course, all of the factors mentioned above may simply be associated with symptoms (like depressed mood) commonly seen in suicidal individuals, and the true culprit for provoking suicidal actions could be a different mechanism altogether, of which we are still unaware.

As mentioned above, this area of research is fraught with difficulties as the brains of suicide completers can only be studied postmortem. One research approach that attempts to circumvent this obstacle while still providing relevant information on the suicidal brain involves the study of pharmacological agents that reduce the risk of suicide. For, if a drug reduces the risk of suicide then perhaps it is reversing or diminishing the impact of neurobiological processes that trigger the event. One example of such a drug is lithium. Lithium is commonly used to treat bipolar disorder but is also recognized to reduce the risk of suicide in individuals who have a mood disorder. Gaining a better understanding of the mechanism of action that underlies this effect might allow for a better understanding of the neurobiology of suicidal behavior as well. Additionally, ketamine is a substance that appears to have fast-acting (within two hours after administration) antidepressant action and also may cause a rapid reduction (as soon as 40 minutes after administration) in suicidal thinking. Understanding how a drug can so quickly cause a shift away from suicidal thoughts may also be able to shed some light on processes that underlie suicidal actions.

Whatever the neurobiological underpinnings of suicidal behavior may be, the search for them should have some urgency about it. Suicide was the 10th leading cause of death in the United States in 2013, and yet treatment for suicidal behavior specifically does not seem to be pursued with the same fervor as treatments for other leading causes of death, like Parkinson's disease, that actually don't lead to as many deaths per year as suicide. Perhaps many consider suicide a fact of life, something that will always afflict a subset of the population, or perhaps the focus is primarily directed toward treating depression with the assumption that better management of depression will lead to a reduction in suicide attempts. However, if we can come to understand what really happens in the brain of someone immediately before he makes the fatal decision to kill himself, treatment to specifically reduce the risk of suicide--regardless of the underlying disorder--is not out of the realm of possibility. Thus, it seems like a goal worth striving for.

Turecki, G. (2014). The molecular bases of the suicidal brain. Nature Reviews Neuroscience, 15(12), 802-816. DOI: 10.1038/nrn3839

Is depression an infectious disease?

Over the past several decades we have seen the advent of a number of new pharmaceutical drugs to treat depression, but major depressive disorder remains one of the most common mood disorders in the United States; over 15% of the population will suffer from depression at some point in their lives. Despite extensive research into the etiology and treatment of depression, we haven't seen a mitigation of the impact it has on our society. In fact, there have even been a lot of questions raised about the general effectiveness of the medications we most frequently prescribe to treat the disorder.

This perceived lack of progress in reducing the burden of depression and the accompanying doubts about the adequacy of our current treatments for it have led some to rethink our approach to understanding the condition. One hypothesis that has emerged from this attempted paradigmatic restructuring suggests that depression is more than just a mood disorder; it may also be a type of infectious disease. According to this perspective, depression may be caused by an infectious pathogen (e.g. virus, bacterium, etc.) that invades the human brain. Although it may sound far-fetched that a microorganism could be responsible for so drastically influencing behavior, it's not without precedent in nature.

Microorganisms and brain function

Perhaps the best-known example of an infectious microorganism influencing brain activity is the effect the parasite Toxoplasma gondii can have on rodent behavior. A protozoan parasite, T. gondii lives and reproduces in the intestines of cats, and infected cats shed T. gondii oocysts in their feces. T. gondii thrives in the feline intestinal tract, making that its desired environment. So, after being forced out of their comfy intestinal home, T. gondii oocysts utilize what is known as an intermediate host to get back into their ideal living environment.

Enter rodents, the intermediate hosts, which have a habit of digging through dog and cat feces to find pieces of undigested food to eat. When rodents ingest feces infected with T. gondii, they themselves become infected with the parasite. Through a mechanism that is still not well understood, T. gondii is then thought to be able to manipulate the neurobiology of rodents to reduce their inherent fear of cats and their associated aversion to the smell of cat urine. While most rodents have an innate fear of cat urine, T. gondii-infected rodents seem to be more nonchalant about the odor. This hypothetically makes them less likely to avoid the places their natural predators frequent, and more likely to end up as a feline snack--a snack that puts T. gondii right back into the feline intestinal tract.

This is only one example of microorganisms influencing brain function; there are many others throughout nature. Because some microorganisms appear to be capable of manipulating mammalian nervous systems for their own purposes, it's conceivable that they could do the same to humans. Indeed, studies in humans have found links between depression and infection with several different pathogens.

One example is a virus known as Borna disease virus (BDV). BDV was initially thought to only infect non-human animals, but has more recently been found to infect humans as well. In other animals, BDV can affect the brain, leading to behavioral and cognitive abnormalities along with complications like meningitis and encephalomyelitis. It is unclear whether BDV infection in humans results in clinically apparent disease, but some contend that it may manifest as psychiatric problems like depression. A meta-analysis of 15 studies of BDV and depression found that people who are depressed are 3.25 times more likely to also be infected by BDV. Although the relationship is still unclear and more research is needed, this may represent a possible link between infectious microorganisms and depression.
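As a brief aside on how to read a figure like "3.25 times more likely": meta-analytic results like this are typically reported as a pooled odds ratio. The short Python sketch below shows how an odds ratio is computed from a simple two-by-two table; the counts are entirely hypothetical, invented only so the ratio works out to about 3.25--they are not data from the BDV studies.

    # Hypothetical counts for illustration only -- NOT data from the BDV studies.
    depressed_infected, depressed_uninfected = 65, 100
    control_infected, control_uninfected = 20, 100

    # Odds of infection within each group.
    odds_depressed = depressed_infected / depressed_uninfected  # 0.65
    odds_control = control_infected / control_uninfected        # 0.20

    odds_ratio = odds_depressed / odds_control
    print(f"Odds ratio: {odds_ratio:.2f}")  # 3.25 with these made-up counts

An odds ratio above 1 indicates that infection is more common among depressed individuals than among controls; it does not, on its own, establish that the infection causes depression.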

Other infectious agents, such as herpes simplex virus-1 (responsible for cold sores), varicella zoster virus (chickenpox), and Epstein-Barr virus have been found in multiple studies to be more common in depressed patients. There have even been links detected between T. gondii infection and depressed behavior in humans. For example, one study found depressed patients with a history of suicide attempts to have significantly higher levels of antibodies to T. gondii than patients without such a history.

Additionally, a number of studies have found indications of an inflammatory response in the brains of depressed patients. The inflammatory response represents the efforts of the immune system to eliminate an invading pathogen. Thus, markers of inflammation in the brains of depressed patients may indicate the immune system was responding to an infectious microorganism while the patient was also suffering from depressive symptoms--providing at least a correlative link between infection and depression.

Interestingly, a prolonged inflammatory response can promote "sickness behavior," which involves the display of traditional signs of illness like fatigue, loss of appetite, and difficulty concentrating--which are symptoms of depression as well. It is also believed that a prolonged inflammatory response can lead to sickness behavior that then progresses to depression, even in patients with no history of the disorder. Thus, inflammation could serve as an indication of an invasion by an infectious pathogen that is capable of bringing about the onset of depression, or it might represent the cause of depression itself.

At this point, these associations between depression and infection are still hypothetical, and we don't know if there is a causal link between any pathogenic infection and depression. If there were, however, imagine how drastically treatment for depression could change. For, if we were able to identify infections that could lead to depression, then we might be able to assess risk and diagnose depression more objectively through methods like measuring antibody levels; we could treat depression the same way we treat infectious diseases: with vaccines, antibiotics, etc. Thus, this hypothesis seems worth investigating not only for its plausibility but also for the number of new viable treatment options that would be available if it were correct.

Canli, T. (2014). Reconceptualizing major depressive disorder as an infectious disease. Biology of Mood & Anxiety Disorders, 4(1). DOI: 10.1186/2045-5380-4-10
