Let there be light: how light can affect our mood

If you're looking for an indication of how intricately human physiology is tied to the environment our species evolved in, you need look no further than our circadian clock. For, the internal environment of our body is regulated by 24-hour cycles that closely mirror the time it takes for the earth to rotate once on its axis. Moreover, these cycles are shaped by changes in the external environment (e.g. fluctuating levels of daylight) associated with that rotation. Indeed, this 24-hour cycle regulates everything from sleep to rate of metabolism to hormone release, and it is so refined that it continues even in the absence of environmental cues. In other words, even if you place a person in a room with no windows to see when the sun rises and sets and no clocks to know the time, he will maintain a regular circadian rhythm that approximates 24 hours.

Despite the ability of circadian rhythms to persist in the absence of environmental cues, our body clock is very responsive to the presence of light in the external environment. It uses information about illumination levels to synchronize diurnal physiological functions to occur during daylight hours and nocturnal functions to occur during the night. Thus, the presence or absence of light in the environment can indicate whether systems that promote wakefulness or sleep should be activated. In this way, ambient light (or lack thereof) becomes an important signal that can lead to the initiation of an array of biological functions.

It may not be surprising then that abnormalities in environmental illumination (e.g. it is light when the body's clock expects it to be dark) can have a generally disruptive effect on physiological function. Indeed, unexpected changes in light exposure levels have been associated with sleep disturbances, cognitive irregularities, and even mood disorders. Many of these problems are thought to occur due to a lack of accord between circadian rhythms and environmental light; however, a role is now also being recognized for the ability of light to affect mood directly, without first influencing circadian rhythms.

Physiology of light detection

For light to be able to influence the 24-hour clock, information about light in the environment must first be communicated to the brain. In non-mammalian vertebrates (e.g. fish, amphibians), there are photoreceptors outside of the eye that can accomplish this task. For example, some animals like lizards have a photoreceptive area below the skin on the top of their heads. This area, sometimes referred to as the third eye, responds to stimulation from light and sends information regarding light in the environment to areas of the brain involved in regulating circadian rhythms.

In humans and other mammals, however, it seems the eyes act as the primary devices for carrying information about light to the brain--even when that information isn't used in the process of image formation. The fact that some blind patients are able to maintain circadian rhythms and display circadian-related physiological changes in response to light stimulation suggests that the retinal mechanism for detecting light for non-image forming functions may involve cells other than the traditional photoreceptors (i.e. rods and cones). While up until about ten years ago it was thought that rods and cones were the only photoreceptive cells in the retina, it is now believed there may be a third class of photoreceptive cell. These cells, called intrinsically photoreceptive retinal ganglion cells (ipRGCs), can respond to light independently of rods and cones. They are thought to have a limited role in conscious sight and image formation, but they may play an important part in transmitting information about environmental light to the brain.

ipRGCs project to various areas of the brain thought to be involved in the coordination of circadian rhythms, but their most important connection is to the suprachiasmatic nuclei (SCN). The SCN are paired structures found in the hypothalamus that each contain only about 10,000 neurons. Although 10,000 neurons is a relatively paltry number compared to other areas of the brain, these combined 20,000 neurons make up what is often referred to as the "master clock" of the body. Through an ingenious mechanism involving cycles of gene transcription and suppression, the cells of the SCN independently display circadian patterns of activity, acting as reliable timekeepers for the body. Projections from the SCN to various other brain regions are responsible for coordinating circadian activity throughout the brain.
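To give a flavor of how a transcription-suppression loop can keep time on its own, below is a minimal sketch of a Goodwin-style oscillator, a classic toy model of exactly this kind of negative feedback. The equations and parameter values are generic illustrative choices, not a model of any particular clock gene; the point is simply that the feedback alone can produce self-sustained rhythms.

```python
# Minimal sketch of a Goodwin-style oscillator: mRNA (m) is translated into
# protein (p), which yields a repressor (r) that suppresses further
# transcription. With a steep enough repression term, the loop oscillates
# with no external input -- a cartoon of a self-sustaining cellular clock.
# All parameters are generic illustrative values, not real SCN data.

def step(m, p, r, dt=0.01, n=12):
    dm = 1.0 / (1.0 + r**n) - 0.1 * m  # transcription, repressed by r
    dp = m - 0.1 * p                   # translation of mRNA into protein
    dr = p - 0.1 * r                   # production of the repressor
    return m + dm * dt, p + dp * dt, r + dr * dt

m = p = r = 0.1
for i in range(20_001):                # simple Euler integration, t = 0..200
    if i % 1_000 == 0:
        print(f"t = {i * 0.01:6.1f}   mRNA = {m:.3f}")  # rises and falls cyclically
    m, p, r = step(m, p, r)
```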

Although the cells in the SCN are capable of maintaining circadian rhythms on their own, they need information from the external environment to match their oscillatory activity up with the solar day. This is where input from ipRGCs comes in; most of this input is supplied via a pathway that travels directly from the retina to the SCN called the retinohypothalamic tract. This tract uses glutamate signaling to notify the SCN when there is light in the external environment, ensuring SCN activity is in the diurnal phase when there is daylight present.

Thus, there is a complex machinery responsible for maintaining physiological activity on a semblance of a 24-hour schedule and matching that circadian cycle up with what is really going on in the outside world. When the operation of this machinery is disrupted in some way, however, it can contribute to a variety of problems.

Indirect effects of light on mood

The brain has evolved a number of mechanisms that allow circadian rhythms to remain synchronized with the solar day. However, when there are rapid changes in the timing of illumination in the external environment, this can lead to a desynchronization of circadian rhythms. This desynchronization then seems to have a disruptive effect on cognition and mood; thus, these effects are described as indirect effects of light on mood because light must first affect circadian rhythms, which in turn affect mood.

Transmeridian travel and shift work

An example of this type of circadian disruption occurs during rapid transmeridian travel, such as flying from New York to California. Crossing multiple time zones causes the body's clock to become discordant with the solar day; after flying from New York to California, the sun goes down three hours later than the body's clock expects it to. This can result in a condition colloquially known as jet lag, but medically referred to by terms that imply circadian disruptions: desynchronosis or circadian dysrhythmia.
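The arithmetic of that mismatch is simple enough to sketch in a few lines. This is purely a toy illustration (the sunset hour below is an example value, and real re-entrainment is far messier):

```python
# Toy illustration of the body clock/local time mismatch after flying
# New York -> California. The sunset hour is just an example value.

offset = 3                     # New York time runs 3 hours ahead of California
local_sunset = 19              # sun sets at 7 pm local (California) time

body_clock_at_sunset = local_sunset + offset  # body is still on New York time
print(f"Local sunset: {local_sunset}:00")
print(f"Body clock at local sunset: {body_clock_at_sunset}:00")  # feels like 10 pm
# Sunset arrives three hours later than the body's clock expects, and it
# takes days of light exposure in the new time zone to re-entrain the clock.
```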

Transmeridian travel can lead to a number of cognitive and physical symptoms. Sleep disturbances afterwards are common, as are mood disturbances like irritability and fatigue. Physical complaints like headache also frequently occur, and studies have found that individuals who undergo transmeridian travel subsequently display decreased physical performance and endurance. Transmeridian travel has even been found to delay ovulation and disrupt the menstrual cycle in women. One study found that airline workers who had been exposed to transmeridian travel for four years displayed deficits in cognitive performance, suggesting there may be a cumulative effect of jet lag on cognition.

Similar disruptions in cognition and physiological function can be seen in individuals who are exposed to high levels of nighttime illumination (e.g. those who work a night shift). People who are awake during nighttime hours and attempt to sleep during the day generally experience sleep disturbances that are associated with cognitive deficits and even symptoms of depression. The long-term effects of continued sleep/wake cycle disruption due to shift work involve a variety of negative outcomes, including an increased risk of cancer.

Seasonal affective disorder

In some cases of depression, symptoms begin to appear as the daylight hours become shorter in the fall and winter months. The symptoms then often decrease in the spring or summer, only to recur annually. This type of seasonal oscillation of depressive symptoms is known as seasonal affective disorder (SAD), and circadian rhythms are hypothesized to be at the heart of the affliction. The leading hypotheses regarding the etiology of SAD suggest it is associated with a desynchronization of circadian rhythms caused by seasonal changes in the length of the day.

According to this hypothesis, in patients with SAD circadian rhythms that are influenced by light become delayed when the sun rises later in the winter. However, some cycles (like the sleep-wake cycle) aren't delayed in the same manner, leading to a desynchronization between biological rhythms and the circadian oscillations of the SCN. One approach to treating SAD that has shown promise has been to expose patients to bright artificial light in the morning. This is meant to mimic the type of morning light exposure patients would receive during the spring and summer, and possibly shift their circadian rhythms (via the retinohypothalamic tract--see above) to regain synchrony. Indeed, studies have found bright light therapy to be just as effective as fluoxetine (Prozac) in treating patients with SAD.

Direct effects of light on mood

In the examples discussed so far, light exposure is hypothesized to lead to changes in mood due to the effects it can have on circadian rhythms. However, it is also becoming recognized that light exposure may be able to directly alter cognition and mood. The mechanisms underlying these effects are still poorly understood, but elucidating them may further aid us in understanding how light may be implicated in mood disorders.

The first studies in this area found that exposure to bright light decreased sleepiness, increased alertness, and improved performance on psychomotor vigilance tasks. More recently, it was observed that exposure to blue wavelength light activated areas of the brain involved in executive functions; another study found that exposure to blue wavelength light increased activity in areas of the brain like the amygdala and hypothalamus during the processing of emotional stimuli.

While it is still unclear what some of these direct effects on brain activity mean in functional terms, awareness of the potential effects of blue wavelength light has led to the investigation of how the use of electronic devices before bed might affect sleep. The results are harrowing for those of us who are prone to use a computer, phone, or e-reader leading up to bedtime: a recent study found that reading an e-reader for several hours before bed led to difficulty falling asleep, decreased alertness the following morning, and delays in the timing of the circadian clock.

Thus, it does seem that light is capable of affecting cognition and mood directly, and the effects may be surprisingly extensive. Interestingly, these types of effects have also been observed in studies with blind individuals, suggesting that direct effects of light exposure (like indirect effects) may be triggered by information sent via the non-image forming cells in the retina (e.g. ipRGCs). Despite the fact that this is a pathway by which light can affect mood without first influencing circadian rhythms, however, there is evidence circadian rhythms can still moderate that effect, as the direct effects of light may differ depending on the time of day the exposure occurs.

Light's powerful influence

Research into the effects of light on the brain has identified a potentially important role for light exposure in influencing mood and cognition. However, there is still much to be learned about the ways in which light is capable of exerting these types of effects. Nevertheless, this important area of research has brought to light (no pun intended) a previously unconsidered factor in the etiology of mood disorders. Furthermore, it has begun to raise awareness of the effects light might be having even during seemingly innocuous activities like using electronic devices before bed. When one considers how important a role sunlight has played in the survival of our species, it makes sense that the functioning of our bodies is so closely intertwined with the timing of the solar day. Perhaps what is surprising is that the advent of artificial lighting led us to believe that we could overcome the influence of that relationship. Recent research, however, suggests that our connection to daylight is more powerful than we had imagined.

LeGates, T., Fernandez, D., & Hattar, S. (2014). Light as a central modulator of circadian rhythms, sleep and affect. Nature Reviews Neuroscience, 15(7), 443-454. DOI: 10.1038/nrn3743

The neurobiological underpinnings of suicidal behavior

When you consider that so much of our energy and such a large portion of our behavioral repertoire is devoted to ways of ensuring our survival, suicide appears to be perhaps the most inexplicable human behavior. What would make this human machine--which most of the time seems to be resolutely programmed to scratch, claw, and fight to endure through even the most dire situations--so easily decide to give it all up, even when the circumstances may not objectively seem all that desperate? Suicide is a difficult behavior to justify rationally, and yet it is shockingly common. More people throughout the world end their lives by suicide each year than are killed by homicide and wars combined.

The multitudinous influences that are thought to contribute to suicidal behavior are also very convoluted and difficult to untangle. Clearly, among different individuals the factors that lead to an act of suicide will vary considerably; nevertheless, there are some variables that are thought to generally increase the risk of suicidal behavior. A number of studies have, for example, demonstrated that genetic factors are associated with a predisposition to suicidal behavior. Also, early-life adversity--like sexual abuse, physical abuse, or severe neglect--has been strongly linked to suicide. However, even among groups with higher suicide risk there is a great deal of variability, which adds to the complexity of the issue. For example, personality traits like impulsiveness and aggression have been associated with an increased risk of suicide--but this relationship is seen primarily in younger people. It is not as apparent in older individuals who display suicidal behavior; they are often characterized by higher levels of harm avoidance instead of risk-taking.

While there are a number of predisposing factors involving personal characteristics or previous life events that make suicidal ideation and behavior more likely, there are also factors that immediately precede a suicide attempt which are thought to be directly linked to the transition from thinking about suicide to acting on those thoughts. Of course, some of those factors are likely to involve changes in neurobiology and neurochemistry that cause suicide--which may have previously just been an occasional thought--to become the focus of a present-moment plan that is sometimes carried out with great urgency and determination. And, while it is important to be able to identify influences that predispose individuals to suicidal thinking in general, an understanding of the neurobiological factors that precipitate a suicide attempt might open the door for treatments designed to protect an individual from acting on (or experiencing) sudden impulses to complete a suicide.

While the distal predisposing factors for suicidal behavior are difficult to identify because of the myriad influences involved, the proximal neurobiological influences are hard to pinpoint due both to their complexity and to the fact that a suicidal crisis is often short-lived and difficult to study. The most direct way to investigate changes in the suicidal brain would be to look at the brains of suicide completers (i.e. those who are now deceased due to suicide). One reason for focusing on suicide completers is that we can expect some neurochemical--and possibly psychological--differences between suicide completers and those who attempted suicide but are still alive. However, working with postmortem brains has its own limitations: obtaining accurate background information is challenging without the ability to interview the patient; there may be effects on the brain (e.g. from the process of death and its associated trauma, or from drugs/medications taken before death) that make it hard to isolate the factors that provoke someone toward suicide; and being able to examine the brain at only one point in time makes causal interpretations difficult.

Regardless, investigations into irregularities in the brains of those who exhibit suicidal behavior (both attempters and completers) have identified several possible contributing factors that may influence the decision to act on suicidal thoughts. Many of these factors are also implicated in depressed states, as most suicidal individuals display some characteristics of a depressed mood even if they don't meet the criteria for a diagnosis of major depressive disorder. (This, of course, adds another layer of complexity to interpretation, as it is difficult to determine whether suicide-related factors are simply characteristics of a depressed mood rather than specifically related to suicidal actions.) The role of each of these factors in suicidal behavior is still hypothetical, and the relative contribution of each is unknown. However, it is thought that some--or all--of them may be implicated in bringing about the brain state associated with suicidal actions.

Alterations in neurotransmitter systems

Abnormalities in the serotonin system have long been linked to depressive behavior, despite more recent doubts about the central role of serotonin in the etiology of depression. Similarly, there appear to be some anomalies in the serotonin system in the brains of suicidal individuals. In an early study on alterations in the serotonin system in depressed patients, Asberg et al. found that patients with low levels of 5-hydroxyindoleacetic acid, the primary metabolite of serotonin (and thus often used as a proxy measure of serotonin levels), were significantly more likely to attempt suicide. Additionally, those who survive a suicide attempt display a diminished response to the administration of fenfluramine, a serotonin agonist that in a typical brain prompts increased serotonin release. A number of neuroimaging studies have also detected reduced serotonin receptor availability in the brains of suicidal patients. This evidence all suggests that abnormalities in the serotonin system play some role in suicidal behavior, although the specifics of that role remain unknown.

As we have learned from investigations of depression, however, it is important to avoid focusing too much on one-neurotransmitter explanations of behavior. Accordingly, a number of other neurotransmitter abnormalities have been detected in suicidal patients as well. For example, gene expression analyses in postmortem brains of individuals who died by suicide have identified altered expression of genes encoding GABA and glutamate receptors in various areas of the brain. Although the consequences of these variations in gene expression are unknown, abnormalities in GABA and glutamate signaling have both also been hypothesized to play a role in depression.

Abnormalities in the stress response

Irregularities in the stress response have long been implicated in depression, and thus it may not be surprising that stress system anomalies have been observed in patients exhibiting suicidal behavior as well. The hypothalamic-pituitary-adrenal (HPA) axis is a network that connects the hypothalamus, pituitary gland, and adrenal glands; it is activated during stressful experiences. When the HPA axis is stimulated, corticotropin-releasing hormone is secreted from the hypothalamus, which causes the pituitary gland to secrete adrenocorticotropic hormone, which then prompts the adrenal glands to release the stress hormone cortisol. In depressed patients, cortisol levels are generally higher than normal, suggesting the HPA axis is hyperactive; this may be indicative of the patient being in a state of chronic stress.

In suicidal individuals, the HPA axis seems to be dysregulated as well. For example, in one study the HPA activity of a group of psychiatric inpatients was tested using what is known as the dexamethasone suppression test (DST). In this procedure, patients are injected with dexamethasone, a synthetic hormone that should act to suppress cortisol secretion if HPA axis activity is normal; if it does not do so, however, it suggests the HPA axis is hyperactive. Out of 78 patients, 32 displayed abnormal HPA activity on the DST. Over the next 15 years, 26.8% of the individuals with abnormal HPA activity committed suicide, while only 2.9% of the individuals with normal DST results killed themselves.
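To put those percentages in perspective, here is a quick back-of-the-envelope check. The group sizes come from the study as described above, but the implied suicide counts are rough estimates, since the reported percentages were derived from a survival analysis rather than simple proportions:

```python
# Back-of-the-envelope check on the dexamethasone suppression test (DST)
# figures reported above. Group sizes are from the text; implied suicide
# counts are rough estimates.

abnormal_dst = 32          # patients with abnormal (non-suppressing) DST results
normal_dst = 78 - 32       # patients with normal DST results

risk_abnormal = 0.268      # estimated 15-year suicide risk, abnormal DST group
risk_normal = 0.029        # estimated 15-year suicide risk, normal DST group

# Relative risk: how many times more likely suicide was in the abnormal group
print(f"Relative risk: {risk_abnormal / risk_normal:.1f}x")  # ~9.2x

# Implied (rounded) number of suicides in each group
print(f"Abnormal DST: ~{round(abnormal_dst * risk_abnormal)} of {abnormal_dst}")  # ~9
print(f"Normal DST:   ~{round(normal_dst * risk_normal)} of {normal_dst}")        # ~1
```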

Another system involved in stress responses that may display irregularities in suicidal individuals is the polyamine stress response (PSR). Polyamines are molecules that are involved in a number of essential cellular functions; their potential role in psychiatric conditions has only been recognized in the past few decades. It is believed that the activation of the PSR and the associated increases in levels of polyamines in the brain may be beneficial, serving a protective role in reducing the impact of a stressor on the brain. And, there appear to be abnormalities in the PSR in the brains of those who have committed suicide. Because the PSR and its role in psychiatric conditions are still just beginning to be understood, however, it is unclear what these alterations in the PSR might mean; future investigations will attempt to elucidate the connection between PSR abnormalities and suicidal behavior.

One of the consequences of stress is the initiation of an inflammatory response. This is thought to be an adaptive reaction to stress, as the stress system likely evolved to deal primarily with physical trauma, and the body would have benefited from reflexive stimulation of the immune system in cases where physical damage had been sustained. This immune system activation would prepare the body to fight off infection that could occur due to potential tissue damage (the inflammatory response is the first step in preventing infection). Thus, it may not be surprising that suicidal patients often display markers of inflammation in the brain. This inflammatory reaction may on its own promote brain changes that increase suicide risk, or it may just be a corollary of the activation of the stress system.

Glial cell abnormalities

While we have a tendency to focus on irregularities in neurons and neuronal communication when investigating the causes of behavior, it is becoming more widely recognized that glial cells also play an essential role in healthy brain function. Accordingly, anomalies in glial cells have been noted in the brains of suicidal patients. Several studies, for example, have identified deficits in the structure or function of astrocytes in the suicidal brain. One study found that cortical astrocytes in post-mortem brains of suicide patients displayed altered morphology. Their enlarged cell bodies and other morphological abnormalities were consistent with the hypothesis that they had been affected by local inflammation. Analyses of gene expression in the postmortem brains of suicide victims also found that genes associated almost exclusively with astrocytes were differentially expressed. While the implications of these studies are not yet fully clear, abnormalities in glial cells represent another area of investigation in our attempts to understand what is happening in the suicidal brain.

Future directions

Irregularities in neurotransmitter systems, a hyperactive stress response, and anomalous glial cell morphology and density all may be factors that contribute to the suicidal phenotype. But it is unclear at this point if any one of these variables is the factor that determines the transition from suicidal ideation to suicidal behavior. It is more likely that they all may contribute to large-scale changes throughout the brain that lead to suicidal activity. Of course, all of the factors mentioned above may simply be associated with symptoms (like depressed mood) commonly seen in suicidal individuals, and the true culprit for provoking suicidal actions could be a different mechanism altogether, of which we are still unaware.

As mentioned above, this area of research is fraught with difficulties as the brains of suicide completers can only be studied postmortem. One research approach that attempts to circumvent this obstacle while still providing relevant information on the suicidal brain involves the study of pharmacological agents that reduce the risk of suicide. For, if a drug reduces the risk of suicide then perhaps it is reversing or diminishing the impact of neurobiological processes that trigger the event. One example of such a drug is lithium. Lithium is commonly used to treat bipolar disorder but is also recognized to reduce the risk of suicide in individuals who have a mood disorder. Gaining a better understanding of the mechanism of action that underlies this effect might allow for a better understanding of the neurobiology of suicidal behavior as well. Additionally, ketamine is a substance that appears to have fast-acting (within two hours after administration) antidepressant action and also may cause a rapid reduction (as soon as 40 minutes after administration) in suicidal thinking. Understanding how a drug can so quickly cause a shift away from suicidal thoughts may also be able to shed some light on processes that underlie suicidal actions.

Whatever the neurobiological underpinnings of suicidal behavior may be, the search for them should be conducted with some urgency. Suicide was the 10th leading cause of death in the United States in 2013, and yet it seems like a treatment for suicidal behavior specifically is not approached with the same fervor as treatments for other leading causes of death, like Parkinson's disease, that actually don't lead to as many deaths per year as suicide. Perhaps many consider suicide a fact of life, as something that will always afflict a subset of the population, or perhaps the focus is primarily directed toward treating depression with the assumption that better management of depression will lead to a reduction in suicide attempts. However, if we can come to understand what really happens in the brain of someone immediately before he makes the fatal decision to kill himself, treatment to specifically reduce the risk of suicide--regardless of the underlying disorder--is not out of the realm of possibility. Thus, it seems like a goal worth striving for.

Turecki, G. (2014). The molecular bases of the suicidal brain. Nature Reviews Neuroscience, 15(12), 802-816. DOI: 10.1038/nrn3839

Is depression an infectious disease?

Over the past several decades we have seen the advent of a number of new pharmaceutical drugs to treat depression, but major depressive disorder remains one of the most common mood disorders in the United States; over 15% of the population will suffer from depression at some point in their lives. Despite extensive research into the etiology and treatment of depression, we haven't seen a mitigation of the impact it has on our society. In fact, there have even been a lot of questions raised about the general effectiveness of the medications we most frequently prescribe to treat the disorder.

This perceived lack of progress in reducing the burden of depression and the accompanying doubts about the adequacy of our current treatments for it have led some to rethink our approach to understanding the condition. One hypothesis that has emerged from this attempted paradigmatic restructuring suggests that depression is more than just a mood disorder; it may also be a type of infectious disease. According to this perspective, depression may be caused by an infectious pathogen (e.g. virus, bacterium, etc.) that invades the human brain. Although it may sound far-fetched that a microorganism could be responsible for so drastically influencing behavior, it's not without precedent in nature.

Microorganisms and brain function

Perhaps the best-known example of an infectious microorganism influencing brain activity is the effect the parasite Toxoplasma gondii can have on rodent behavior. A protozoan parasite, T. gondii lives and reproduces in the intestines of cats, and infected cats shed T. gondii oocysts in their feces. T. gondii thrives in the feline intestinal tract, making that its desired environment. So, after being forced out of their comfy intestinal home, T. gondii oocysts utilize what is known as an intermediate host to get back into their ideal living environment.

Enter rodents, the intermediate hosts, which have a habit of digging through dog and cat feces to find pieces of undigested food to eat. When rodents ingest feces infected with T. gondii, they themselves become infected with the parasite. Through a mechanism that is still not well understood, T. gondii is then thought to be able to manipulate the neurobiology of rodents to reduce their inherent fear of cats and their associated aversion to the smell of cat urine. While most rodents have an innate fear of cat urine, T. gondii-infected rodents seem to be more nonchalant about the odor. This hypothetically makes them less likely to avoid the places their natural predators frequent, and more likely to end up as a feline snack--a snack that puts T. gondii right back into the feline intestinal tract.

This is only one example of microorganisms influencing brain function; there are many others throughout nature. Because some microorganisms appear to be capable of manipulating mammalian nervous systems for their own purposes, it's conceivable that they could do the same to humans. Indeed, studies in humans have found links between depression and infection with several different pathogens.

One example is a virus known as Borna disease virus (BDV). BDV was initially thought to only infect non-human animals, but has more recently been found to infect humans as well. In other animals, BDV can affect the brain, leading to behavioral and cognitive abnormalities along with complications like meningitis and encephalomyelitis. It is unclear whether BDV infection in humans results in clinically-apparent disease, but some contend that it may manifest as psychiatric problems like depression. A meta-analysis of 15 studies of BDV and depression found that people who are depressed are 3.25 times more likely to also be infected with BDV. Although the relationship is still unclear and more research is needed, this may represent a possible link between infectious microorganisms and depression.

Other infectious agents, such as herpes simplex virus-1 (responsible for cold sores), varicella zoster virus (chickenpox), and Epstein-Barr virus have been found in multiple studies to be more common in depressed patients. There have even been links detected between T. gondii infection and depressed behavior in humans. For example, one study found depressed patients with a history of suicide attempts to have significantly higher levels of antibodies to T. gondii than patients without such a history.

Additionally, a number of studies have found indications of an inflammatory response in the brains of depressed patients. The inflammatory response represents the efforts of the immune system to eliminate an invading pathogen. Thus, markers of inflammation in the brains of depressed patients may indicate the immune system was responding to an infectious microorganism while the patient was also suffering from depressive symptoms--providing at least a correlative link between infection and depression.

Interestingly, a prolonged inflammatory response can promote "sickness behavior," which involves the display of traditional signs of illness like fatigue, loss of appetite, and difficulty concentrating--all of which are symptoms of depression as well. It is also believed that a prolonged inflammatory response can lead to sickness behavior that then progresses to depression, even in patients with no history of the disorder. Thus, inflammation could serve as an indication of an invasion by an infectious pathogen that is capable of bringing about the onset of depression, or it might represent the cause of depression itself.

At this point, these associations between depression and infection are still hypothetical, and we don't know if there is a causal link between any pathogenic infection and depression. If there were, however, imagine how drastically treatment for depression could change. For, if we were able to identify infections that could lead to depression, then we might be able to assess risk and diagnose depression more objectively through methods like measuring antibody levels, and we could treat depression the same way we treat infectious diseases: with vaccines, antibiotics, etc. Thus, this hypothesis seems worth investigating not only for its plausibility but also for the number of new viable treatment options that would be available if it were correct.

Canli, T. (2014). Reconceptualizing major depressive disorder as an infectious disease. Biology of Mood & Anxiety Disorders, 4(1). DOI: 10.1186/2045-5380-4-10

Serotonin, depression, neurogenesis, and the beauty of science

If you asked any self-respecting neuroscientist 25 years ago what causes depression, she would likely have only briefly considered the question before responding that depression is caused by a monoamine deficiency. Specifically, she might have added, in many cases it seems to be caused by low levels of serotonin in the brain. The monoamine hypothesis that she would have been referring to was first formulated in the late 1960s, and at that time was centered primarily around norepinephrine. But in the decades following the birth of the monoamine hypothesis, its focus shifted to serotonin, in part due to the putative success of antidepressant drugs that targeted the serotonin transporter (e.g. selective serotonin reuptake inhibitors, or SSRIs). The monoamine/serotonin hypothesis eventually became generally recognized as viable by the scientific community. Interestingly, it also became widely accepted by the public, who were regularly exposed to television commercials for antidepressant drugs like Prozac, Lexapro, and Celexa--drugs whose commercials specifically mentioned a serotonin imbalance as playing a role in depression.

Over the years, however, the scientific method quietly and efficiently went to work. Evidence gradually accumulated that indicated that the serotonin hypothesis does a very inadequate job of explaining depression. For example, although SSRIs increase serotonin levels within hours after drug administration, if their administration leads to beneficial effects--a big if--it usually takes 2-4 weeks of daily administration for those effects to appear. One would assume that if serotonin levels were causally linked to depression, then soon after serotonin levels increased, mood would begin to improve. Also, reducing levels of serotonin in the brain does not cause depression. The list of studies that don't fully support the serotonin hypothesis of depression is actually quite lengthy, and most of the scientific community now agrees that the hypothesis is insufficient as a standalone explanation of depression.

In the 1990s another hypothesis, known as the neurogenic hypothesis, was proposed with the hopes of filling in some of the holes in the etiology of depression that the monoamine hypothesis seemed to be unable to fill. The neurogenic hypothesis suggests that depression is at least partially caused by an impairment of the brain's ability to produce new neurons, a process known as neurogenesis. Specifically, researchers have focused on neurogenesis in the hippocampus, one of the only areas in the brain where neurogenesis has been observed in adulthood (the other being the subventricular zone).

The neurogenic hypothesis was formulated based on several observations. First, depressed patients seem to have smaller hippocampi than the general population, and their hippocampi also appear to be smaller during periods of depression than during periods of remission. Second, glucocorticoids like cortisol are elevated in depression, and glucocorticoids appear to inhibit neurogenesis in the hippocampus in rodents and non-human primates. Finally, there is evidence that the chronic administration of antidepressants increases neurogenesis in the hippocampus in rodents.

The neurogenic hypothesis thus suggests that depression is associated with a reduction in the birth of new neurons in the hippocampus, an area of the brain important to stress regulation, cognition, and mood. According to this hypothesis, when someone takes antidepressants, the drugs do raise levels of monoamines like serotonin, but they also enact long-term processes that increase neurogenesis in the hippocampus. This neurogenesis is hypothesized to be a crucial part of the reason antidepressants work, and the fact that it takes some time for hippocampal neurogenesis to return to normal may help to explain why antidepressants take several weeks to have an effect.

This may all sound logical, but the neurogenic hypothesis has its own share of problems. For example, while stress-related impairment of neurogenesis has been observed in rodents, we don't have definitive evidence it occurs in humans. Human studies thus far have relied on comparing the size of the hippocampi in depressed and non-depressed patients. While smaller hippocampi have been observed in depressed individuals, it is not clear that this is due to reduced neurogenesis rather than some other type of structural change that might have occurred during depression.

Similarly, while the administration of antidepressants has been associated with increased neurogenesis in rodent models of stress, we don't have clear evidence of this in humans. In humans we again have to rely on looking at things like hippocampal size. Because there could be a number of explanations for changes in the size of the hippocampi, we can't assume neurogenesis is the sole factor involved--or that it is involved at all. Additionally, some studies in rodents have found that antidepressants lead to a reduction in anxiety or depressive symptoms in the absence of increased hippocampal neurogenesis.

Another problem is that when neurogenesis is experimentally decreased in rodents, the animals don't usually display depressive symptoms. Experiments of this type haven't been performed with humans or non-human primates, so we don't know if a reduction in neurogenesis in any species is actually sufficient to cause depression. And no studies have found that increasing neurogenesis alone is enough to alleviate depressive-like symptoms.

Of course, none of this means the neurogenic hypothesis is incorrect, but it does suggest there is a long way to go before we can feel confident about incorporating it fully into our understanding of depression. It is in the scientific community's reluctance to embrace this hypothesis too quickly that I see the beauty of science. Although it took decades of testing and revising before the monoamine hypothesis became a widely accepted explanation for depression, one could argue (based on its now-recognized shortcomings) that we accepted it too readily.

However, it seems that many in the scientific community have learned from that mistake. Although there is no shortage of publications whose authors may be too willing to anoint the neurogenic hypothesis as a new unifying theory of depression, overall the tone when speaking of the neurogenic hypothesis seems to be cautious and/or critical. There is also a great deal of discussion now in the literature about the complexity of mood disorders like depression, and about how unlikely it is that their manifestation in a diverse population of individuals can be explained by just one mechanism, whether it be impaired neurogenesis or a serotonin deficiency.

Thus, the neurogenic hypothesis will require much more testing before we can consider it an important piece in the puzzle of depression. Even if further testing supports it, however, it will likely be considered just that--a piece in the puzzle, instead of an overarching explanation of the disorder. And that circumspect approach to explaining depression represents an important advancement in the way we look at psychiatric disorders.

See also: http://www.neuroscientificallychallenged.com/blog/2008/04/serotonin-hypothesis-and-neurogenesis

Miller, B., & Hen, R. (2015). The current state of the neurogenic theory of depression and anxiety. Current Opinion in Neurobiology, 30, 51-58. DOI: 10.1016/j.conb.2014.08.012

Autism, SSRIs, and Epidemiology 101

I can understand the eagerness with which science writers jump on stories that deal with new findings about autism spectrum disorders (ASDs). After all, the mystery surrounding the rapid increase in ASD rates over the past 20 years has made any ASD-related study that may offer some clues inherently interesting. Because people are anxiously awaiting some explanation of this medical enigma, it seems like science writers almost have an obligation to discuss new findings concerning the causes of ASD.

The problem with many epidemiological studies involving ASD, however, is that we are still grasping at straws. There seem to be some environmental influences on ASD, but the nature of those influences is, at this point, very unclear. This lack of clarity means that the study of nearly any environmental risk factor starts out having potential legitimacy. And I don't mean that as a criticism of these studies--it's just where we're at in our understanding of the rise in ASD rates. After we account for mundane factors like increases in diagnosis due simply to greater awareness of the disorder, there's a lot left to figure out.

So, with all this in mind, it's understandable (at least in theory) to me why a study published last week in the journal Pediatrics became international news. The study looked at a sample of children that included healthy individuals along with those who had been diagnosed with ASD or another disorder involving delayed development. The researchers asked the mothers of these children about their use of selective serotonin reuptake inhibitors (SSRIs) during pregnancy. 1 in 10 Americans is currently taking an antidepressant, and SSRIs are the most frequently prescribed type of antidepressant. Thus, SSRIs are taken daily by a significant portion of the population.

Before I tell you what the results of the study were, let me tell you why we should be somewhat cautious in interpreting them. This study is what is known as a case-control study. In a case-control study, investigators identify a group of individuals with a disorder (the cases) and a group of individuals without the disorder (the controls). Then, the researchers employ some method (e.g. interviews, examination of medical records) to find out if the cases and controls were exposed to some potential risk factor in the past. They compare rates of exposure between the two groups and, if more cases than controls had exposure to the risk factor, it allows the researchers to make an argument for this factor as something that may increase the risk of developing the disease/disorder.
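To make the arithmetic of that comparison concrete, here is a minimal sketch using a 2x2 table. All of the counts below are hypothetical, invented purely for illustration; the odds ratio is the usual summary statistic for this design, since case-control sampling doesn't allow disease risk to be estimated directly:

```python
# Minimal sketch of a case-control comparison using a 2x2 table.
# All counts are hypothetical, invented purely for illustration.

cases_exposed, cases_unexposed = 40, 60        # children with the disorder
controls_exposed, controls_unexposed = 20, 80  # children without the disorder

# Odds of past exposure within each group
odds_cases = cases_exposed / cases_unexposed
odds_controls = controls_exposed / controls_unexposed

# Odds ratio (OR): OR > 1 means the exposure was more common among cases,
# i.e. a possible -- not proven -- risk factor.
print(f"Odds ratio: {odds_cases / odds_controls:.2f}")  # 2.67 with these counts
```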

If you take any introductory epidemiology (i.e. the study of disease) course, however, you will learn that a case-control study is fraught with limitations. For, even if you find that a particular exposure is frequently associated with a particular disease, you still have no way of knowing if the exposure is causing the disease or if some other factor is really the culprit. For example, in a study done at the University of Pennsylvania in the late 1990s, researchers found that children who slept with nightlights on had a greater risk of nearsightedness when they got older. This case-control study garnered a lot of public attention as parents began to worry that they might be ruining their kids' eyesight by allowing them to use a nightlight. Subsequent studies, however, found that children inherit alleles for nearsightedness from their parents. Nearsighted parents were coincidentally more likely to use nightlights in their children's rooms (probably because it made it easier for the nearsighted parents to see).

A variable that isn't part of the researcher's hypothesis but still influences a study's results is known as a confounding variable. In the case of the nearsightedness study, the confounding variable was genetics. Case-control studies are done after the fact, and thus experimenters have little control over other influences that may have affected the development of disease. Thus, there are often many confounding influences on relationships detected in case-control studies.
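To see how a confounder can manufacture an association out of nothing, here is a toy simulation loosely modeled on the nightlight example above. Every probability in it is invented for illustration, and by construction the nightlight has no effect at all on the child's eyesight:

```python
# Toy simulation of confounding: parental nearsightedness (the confounder)
# independently raises both the chance of nightlight use and the chance of
# a nearsighted child. The nightlight itself does nothing, yet an
# association between nightlights and nearsightedness appears in the data.
import random

random.seed(0)
counts = {True: [0, 0], False: [0, 0]}   # nightlight -> [myopic children, total]

for _ in range(100_000):
    myopic_parents = random.random() < 0.3
    nightlight = random.random() < (0.6 if myopic_parents else 0.2)
    myopic_child = random.random() < (0.5 if myopic_parents else 0.1)  # no nightlight term!
    counts[nightlight][0] += myopic_child
    counts[nightlight][1] += 1

for nightlight in (True, False):
    myopic, total = counts[nightlight]
    print(f"Nightlight={nightlight}: myopia rate = {myopic / total:.1%}")
# Prints roughly 33% vs 17% -- a strong "association" with zero causal effect.
```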

So, a case-control study can't be used to confirm a cause-and-effect connection between an exposure and a disorder or disease. What it can do is provide leads that scientists can then follow up on using a more rigorous experimental design (like a cohort study or randomized trial). Indeed, the scientific literature is replete with case-control studies that ended up being false leads. Sometimes, however, case-control results have been replicated with better designs, leading to important discoveries. This is exactly what happened with early reports examining smoking and lung cancer.

Back to the recent study conducted by Harrington et al. The authors found that SSRI use during the first trimester was more common in mothers whose children went on to develop ASD than in mothers who had children who developed normally. The result was only barely statistically significant. This fact, combined with the width of the confidence interval, suggests it is not an overly convincing finding--but it was a finding nonetheless. In addition to an increased risk of ASD, the authors also point out that SSRI exposure during the second and third trimesters was higher among mothers of boys with other developmental delays. Again, however, the effect was just barely statistically significant and even less convincing than the result concerning ASD.

So, the study ended up with some significant results that aren't all that impressive. Regardless, because this was a case-control design, there is little we can conclude from the study. To realize why, think about what other factors women who take SSRIs might have in common. Perhaps one of those influences, and not the SSRI use itself, is what led to an increased risk of ASD. For example, it seems plausible that the factors that make a mother more susceptible to a psychiatric disorder might also play a role in making her child more susceptible to a neurodevelopmental disorder. In fact, a cohort study published last year with a much larger sample size found that, when the influence of the condition women were taking SSRIs for was controlled for, there was no significant association between SSRI use during pregnancy and ASD.

The fact that this case-control study doesn't solve the mystery of ASD isn't a knock on the study itself. If anything, it's a knock on some of the science writing done in response to the study. I can't go so far as to say these types of studies shouldn't be reported on. But, they should be reported on responsibly, and by writers who fully understand and acknowledge their shortcomings. For, it is somewhat misleading to the general public (who likely isn't aware of the limitations of a case-control study) when headlines like this appear: "Study: Moms on antidepressants risk having autistic baby boys."

The safety of SSRI use during pregnancy is still very unclear. But both SSRIs and untreated depression during pregnancy have been linked to negative health outcomes for a child. Thus, using SSRIs during pregnancy is something a woman should discuss at length with her doctor to determine if treatment of the underlying condition poses more of a risk than leaving the condition untreated. In making that decision, however, the barely significant findings from a case-control study should not really be taken into consideration.

Harrington, R. A., Lee, L.-C., Crum, R. M., Zimmerman, A. W., & Hertz-Picciotto, I. (2014). Prenatal SSRI use and offspring with autism spectrum disorder or developmental delay. Pediatrics. DOI: 10.1542/peds.2013-3406d