2-Minute Neuroscience: Thalamus

In this video, I cover the thalamus. I discuss the function of the thalamus as a relay station for information that is traveling to the cerebral cortex. I also cover some of the major nuclei of the thalamus, including the ventral posterolateral nucleus (VPL), ventral posteromedial nucleus (VPM), pulvinar nucleus, medial geniculate nucleus, lateral geniculate nucleus, and reticular nucleus.

Know your brain: Orbitofrontal cortex

Orbitofrontal cortex (in green)

Where is the orbitofrontal cortex?

The orbitofrontal cortex is the area of the prefrontal cortex that sits just above the orbits (also known as the eye sockets). It is thus found at the very front of the brain, and has extensive connections with sensory areas as well as limbic system structures involved in emotion and memory.

What is the orbitofrontal cortex and what does it do?

The orbitofrontal cortex (OFC) is a poorly understood area of the brain, but also one that inspires a great deal of interest for some of the roles it is hypothesized to play in higher-order cognition like decision-making. Indeed, the prefrontal cortex and frontal lobes in general are considered essential for rational thought, reasoning, and even the full expression of personality. Thus, much of the research into the OFC has focused on functions that seem especially important to the thought processes that separate humans from other species with "lesser" cognitive abilities. We know very little for sure about the OFC, however, and the degree to which the functions ascribed to it below are truly regulated by the OFC is still being debated.

Some of the functions commonly associated with the OFC--and indeed with prefrontal areas as a whole--involve impulse control and response inhibition. This hypothesized role for the frontal areas of the brain can be traced back to Phineas Gage, who allegedly experienced a drastic reduction in social inhibition after a railroad accident that caused extensive damage to his prefrontal and orbitofrontal cortices. Deficits in response inhibition have also been seen in human patients and non-human primates that have damage to the OFC. For example, in one experiment patients with OFC damage were rewarded by touching a particular image when it appeared on the screen of a video monitor, but taught to avoid touching a different image. Although they were able to learn to avoid one image initially, when the researchers reversed the value of the images (such that the once-rewarding image was now the one to avoid), participants with OFC damage had difficulty inhibiting their impulse to touch the previously-rewarding image. This may suggest that the OFC is concerned with a type of impulse control; however, others have argued that the OFC might only be involved in inhibiting impulses after this specific type of reversal procedure, as studies that have attempted to show a general deficit in response inhibition after OFC damage have not been consistent.

The OFC is also frequently associated with certain types of decision-making. For example, it has been hypothesized that the OFC is important in decisions that require comparing the relative value of several options to determine which is preferable. Patients with damage to the OFC have been found to display deficits in gambling tasks that require them to consider the usefulness of different gambling strategies to maximize the potential of earning fake money. A study in monkeys also found that neuronal firing patterns in the OFC changed depending on the value of a juice reward the monkeys were offered (e.g. some neurons were activated more in response to being offered a rewarding Kool-Aid drink than in response to water). Later research has suggested the OFC may have a more specific role than value-based decision-making in general; it has been hypothesized that the OFC may be necessary for making predictions about decisions based on newly-learned information. In other words, the OFC might not be needed to form a simple association between a stimulus and a reward, but may be necessary if something changes about the stimulus and an estimate must be made about its potential to still provide a reward.

The OFC has also been implicated as playing a significant role in emotion. As mentioned above, the OFC is interconnected with limbic system structures like the amygdala, which are considered to be important to the experience of emotion. It has been hypothesized that the OFC is involved specifically in modulating bodily changes that are associated with emotion (e.g. a nervous feeling in the stomach and increased perspiration linked to anxiety). This hypothesis has been supported by experiments with patients who have OFC damage; in gambling tasks they tend to make risky choices and yet display no signs of anxiety as measured, for example, by skin conductance (which gauges perspiration). Healthy controls make fewer risky choices, and when faced with the option of making a risky choice, they display increased skin conductance, which suggests they are perspiring slightly. The patients with OFC damage can identify which choices are the most risky, yet they continue to make them. Researchers have suggested this is because the OFC is involved in providing an important bodily signal that helps individuals identify a poor choice by initiating an emotional response to it. Others have argued, however, that the deficits observed in gambling tasks with OFC-damaged patients may not necessarily be emotionally-based or involve only emotionally-guided decisions. Indeed, patients with OFC damage don't seem to display a global emotional deficit, which might be expected if the OFC played a crucial role in regulating emotions.

Thus, although the functions discussed above have all been associated with the OFC, there is still a great deal of uncertainty as to the extent they are controlled by the OFC; at this point there is no consensus on what the OFC does and does not do. It seems safe to say that the OFC plays an important role in cognition, but it will take further research to determine what exactly that role is.

Stalnaker, T., Cooch, N., & Schoenbaum, G. (2015). What the orbitofrontal cortex does not do. Nature Neuroscience, 18(5), 620-627. doi:10.1038/nn.3982

Limitations of the consensus: How widely-accepted hypotheses can sometimes hinder understanding

To those of us who believe strongly in the scientific method, it is the only approach that allows us to make assertions about the relationship between two events or variables with any confidence. Due to the inherent flaws in human reasoning, our non-scientific conclusions are frequently riddled with bias, misunderstanding, and misattribution. Thus, it seems there is little that can be trusted if it hasn't been scientifically verified.

The scientific method, however, is a human creation as well, and therefore it is less than perfect; at the very least, our use of it is flawed because human foibles inevitably creep into the process. One example of this can be seen when the drive to make scientific discoveries is mixed with financial incentives. When this occurs, malfeasance becomes much more likely, and tainted results become a distinct possibility. But there are many other flaws in the scientific process that are a bit more subtle--and certainly less heinous--than fiscally-influenced dishonesty. Another pitfall, for instance, stems from the natural human bias toward favoring the familiar, understandable, and available ways of explaining a phenomenon over those that are less so.

This cognitive bias is generally most detrimental to the understanding of phenomena that are, as of yet, not well understood. For, when we gain some insight that we believe helps us to illuminate the mechanism underlying a poorly-understood phenomenon, we tend to build off of that insight. It serves as the foundation for many closely-related hypotheses and experiments to test those hypotheses. If these experiments are generally supportive of the new perspective, then our understanding of the phenomenon begins to form around that viewpoint. The new perspective then becomes the widely-accepted way of thinking about the phenomenon.

This may be fine if the newly-devised hypothesis ends up being indisputably accurate, as there is nothing wrong with building off of previous ideas to further overall understanding--indeed, this is the way the growth of human knowledge generally works. The problem occurs, however, when a sense of unanimity develops around the new hypothesis. This widespread agreement can then limit the creative exploration of other mechanisms: it may make us quicker to disregard a competing hypothesis, less capable of obtaining grant funding to explore one, and sometimes less likely to pay attention to shortcomings in the consensus hypothesis itself. Thus, if a consensus hypothesis does not tell the full story (as is often the case), its acceptance may actually hinder scientific progress.

The dopamine hypothesis of schizophrenia

The dopamine hypothesis of schizophrenia is arguably a good example of this conundrum. Schizophrenia is a complex disorder that affects over 21 million people worldwide, but it can manifest in a drastically different way from patient to patient. It is characterized by a diverse group of symptoms that generally involve some detachment from reality, disordered thought processes, and/or impaired social interaction or withdrawal. The symptoms are commonly grouped as negative symptoms, which involve the absence of typical behaviors (e.g. limited speech, lack of affect), and positive symptoms, which involve the presence of unusual behaviors (e.g. hallucinations, delusions). Adding to the byzantine nature of the disorder, there are at least five subtypes of schizophrenia that distinguish schizophrenics by general trends in symptomatology. For example, paranoid schizophrenics often experience delusions of persecution along with hallucinations, while catatonic schizophrenics display a predominance of negative symptoms that include lack of movement, motivation, and emotion. Generally, the first clear symptoms of schizophrenia emerge in adolescence or adulthood in the form of an initial break with reality called a psychotic episode. The course of the disorder, however, is variable; some patients experience recurring psychotic episodes followed by remission of symptoms, while others suffer from constant symptoms that severely impact cognition and functioning on a daily basis.

The dopamine hypothesis of schizophrenia was formulated in the 1960s when it was discovered that drugs that can be used to treat schizophrenia also act as dopamine receptor antagonists. Because these drugs--which as a class are referred to as antipsychotics or neuroleptics--help to alleviate the positive symptoms of schizophrenia, it was assumed that the mechanism underlying schizophrenia must involve an increased level of dopamine neurotransmission. In other words, if antipsychotic drugs block dopamine activity and improve schizophrenic symptoms, then those symptoms must be caused by too much dopamine activity. Dozens of dopamine antagonist antipsychotic drugs have been developed since the 1960s with this reasoning in mind.
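
Since this reasoning turns on drugs blocking dopamine receptors, it may help to make the idea of receptor blockade concrete. Below is a minimal sketch of the standard law-of-mass-action relationship between a drug's concentration, its affinity for a receptor (expressed as a Ki value), and the fraction of receptors it occupies. The drugs, Ki values, and concentrations are hypothetical, chosen purely for illustration; the same relationship is also relevant to the discussion of receptor affinity later in this post, since occupancy depends on dose as well as affinity.

```python
# Receptor occupancy from the law of mass action, a standard pharmacology
# relationship: occupancy = [D] / ([D] + Ki). Included only to make
# "affinity" and "blockade" concrete; the Ki values and concentrations
# below are hypothetical, not data for any real antipsychotic.

def occupancy(drug_conc_nm: float, ki_nm: float) -> float:
    """Fraction of receptors occupied at equilibrium."""
    return drug_conc_nm / (drug_conc_nm + ki_nm)

# Two hypothetical drugs: high affinity (low Ki) vs. low affinity (high Ki).
for name, ki in [("high-affinity drug (Ki = 1 nM)", 1.0),
                 ("low-affinity drug (Ki = 50 nM)", 50.0)]:
    for conc in (1.0, 10.0, 100.0):
        print(f"{name}: [D] = {conc:>5.1f} nM -> occupancy = {occupancy(conc, ki):.0%}")
```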

Further support for the dopamine hypothesis of schizophrenia was inferred from a phenomenon known as stimulant-induced psychosis. In some individuals who don't have a history of psychosis, when high doses of stimulant drugs like amphetamine are taken (or even when normal doses are administered over long periods of time), the effects of the drugs appear similar in some ways to a psychotic episode. Because the drugs that can have this effect increase dopamine levels as one of their primary mechanisms of action, researchers suggested this strengthened the hypothesis that high dopamine levels are associated with schizophrenia. That is, if increasing dopamine levels through drug use can cause something that resembles a psychotic episode, then it is likely that psychotic episodes that occur naturally are also due to high dopamine levels.

Bolstered by these pieces of evidence, the dopamine hypothesis has guided schizophrenia research and drug development for decades--and continues to do so. Over that time, additional research supporting the hypothesis has accumulated. For example, when amphetamine is administered to schizophrenic patients, the patients display greater increases in dopamine levels in response to the drug than healthy controls do. This supports the hypothesis that dopamine neurotransmission is dysregulated in schizophrenic patients, and suggests it may be elevated at baseline. Schizophrenic patients have also been found to have altered presynaptic dopamine function, including an increased capacity for dopamine synthesis and increased dopamine release from presynaptic neurons. The number of studies that have supported the hypothesis that dopamine signaling is abnormal in schizophrenic patients is actually quite extensive.

Regardless, serious questions about the role of dopamine in schizophrenia remain. Some still argue it isn't clear that there are dopaminergic abnormalities in the brains of schizophrenic patients, but even if we accept the premise that there are, it remains undetermined whether these differences in dopamine activity are the primary cause of the symptoms of schizophrenia. Despite the fact that antipsychotic drugs reduce activity at dopamine receptors, approximately one-third of schizophrenic patients don't respond to most antipsychotics. This suggests that some other mechanism must be at play in at least a significant minority of patients. Also, dopaminergic abnormalities alone don't seem to explain the negative symptoms of schizophrenia, as these are more resistant to the therapeutic effects of antipsychotic drugs than positive symptoms are. Indeed, some evidence even suggests negative symptoms may be improved by increasing dopamine levels. Furthermore, antipsychotics vary significantly in their affinity for dopamine receptors, and that affinity doesn't always predict the clinical effectiveness of the drugs. Some antipsychotics have an affinity for serotonin receptors as well, and in some cases serotonin receptor affinity can predict clinical effectiveness, which suggests a role for the serotonin system in the mechanism underlying schizophrenia. And direct evidence for the hypothesis is lacking; when brains have been studied post-mortem or cerebrospinal fluid has been sampled to test for excessive dopamine activity, the results have been inconsistent.

These shortcomings have prompted a number of revisions of the dopamine hypothesis. For example, it has now been suggested that schizophrenia is characterized by both excessive dopamine transmission and low dopamine activity in different areas of the brain. According to this perspective, dopamine underactivity is associated with negative symptoms and overactivity with positive symptoms. Some researchers, however, have begun to consider other neurotransmitter systems as playing a central role in schizophrenia. One hypothesis that is gaining considerable support suggests that there is reduced glutamate activity in the brains of schizophrenics. This glutamate hypothesis doesn't argue that dopamine activity is normal in schizophrenia, but rather that glutamate dysfunction may play an equally important role.

Thus, after decades of the dopamine hypothesis guiding schizophrenia research, more and more investigators now consider it unlikely that dopamine dysfunction fully explains schizophrenia. Of course, it would make sense that such a complex disorder cannot be explained by fluctuations in the levels of just one neurotransmitter. In fact, if the dopamine hypothesis had been devised today, it might have had a more difficult time gaining such widespread support, as the field of neuroscience is now much more wary than it was a few decades ago of explanations for complex disorders that focus primarily on one neurotransmitter (or one gene, brain region, etc.); we have learned from experience that these explanations often end up being gross oversimplifications.

But has the dopamine hypothesis hindered our understanding of schizophrenia? It's really impossible to know for sure. When one hypothesis dominates an area of research for a prolonged period of time, however, it usually does have a significant influence on that research. Widespread acceptance of the hypothesis can create a situation where challenges to it become less frequent, and competing hypotheses are often not given as much attention. Researchers may even find it easier to get grants funded when those grants involve an investigation of some aspect of the well-known hypothesis than when they involve venturing out into less familiar territory. This can have a restrictive effect on research, making it more difficult to explore alternative hypotheses and thus making the consensus hypothesis more popular by default (because there are no viable alternatives). Thus, perhaps if the dopamine hypothesis hadn't had such extensive support, we would have seen competing perspectives like the glutamate hypothesis garner attention earlier on.

Additionally, when there is a consensus hypothesis, investigators may be more likely to disregard research results that disagree with it. After all, if everyone knows high dopamine levels cause schizophrenia, then there must be something wrong with the methods of an experiment that indicates otherwise. This type of thinking can contribute to publication bias, a tendency to only publish favorable results and discard findings that don't support one's hypothesis. This type of bias can serve to further propagate a consensus hypothesis, as potentially contradictory research results are considered quirky aberrations instead of leads worth following. Schizophrenia research doesn't show clear evidence of widespread publication bias, but it doesn't seem to be entirely free of its influence, either.

Regardless, one could argue that a concentrated focus on one hypothesis is an important part of scientific investigation. It allows for the organization of our thoughts, and through the reduction of a complicated process into its component parts, helps us to make sense of at least a portion of what is going on. Successful experiments supporting a consensus hypothesis may also inspire increased experimentation in and attention to an area of research, which in and of itself may speed up the process of coming to understand a phenomenon more fully.

However, even if there are benefits to this increased focus, it may be best to maintain that focus while keeping in mind that widespread acceptance of a hypothesis does not mean it is correct. It may be useful to constantly remind ourselves that we still know very little about neuroscience, and that in most cases the simpler a hypothesis seems to be, the more likely it is to be lacking. Continuing to seek out greater complexity instead of focusing all our efforts on finding support for an easy-to-explain mechanism may allow us to avoid falling into the trap of the consensus hypothesis, which can in some ways limit the growth of our understanding.

Moncrieff, J. (2009). A Critique of the Dopamine Hypothesis of Schizophrenia and Psychosis. Harvard Review of Psychiatry, 17(3), 214-225. doi:10.1080/10673220902979896

Know your brain: Cochlea

Where is the cochlea?

Cochlea and cochlea in cross-section. Image courtesy of OpenStax College.

The cochlea is a coiled structure that resembles a snail shell (cochlea comes from the Greek kochlos, which means "snail"); it is found within the inner ear. It is a small--yet complex--structure (about the size of a pea) that consists of three canals that run parallel to one another: the scala vestibuli, scala media, and scala tympani.

What is the cochlea and what does it do?

When sound waves travel through the canal of our outer ear, they hit the tympanic membrane (aka eardrum) and cause it to vibrate. This vibration prompts movement in the ossicles, a trio of tiny bones that transmit the vibration to a structure called the oval window, which sits in the wall of the cochlea. The ossicle bone known as the stapes taps on the oval window to pass the vibration on to the cochlea, all the while using a fine-tuned movement that preserves the frequency of the original sound wave that hit the eardrum.

The cochlea is filled with fluid. Specifically, the scala vestibuli and scala tympani contain a fluid called perilymph, which is similar in composition to cerebrospinal fluid, and the scala media contains endolymph, which more closely resembles intracellular fluid in its ionic concentrations. When the oval window is depressed by the stapes, it creates waves that travel through the fluid of the cochlea, and these waves cause a structure called the basilar membrane to move as well.

The basilar membrane separates the scala tympani from the scala media. When waves flow through the fluid in the cochlea, they create small ripples that travel down the basilar membrane itself (to visualize these ripples imagine the basilar membrane as a rug someone is shaking out). The basilar membrane is structured such that different sections of the membrane respond preferentially to different frequencies of sound. As waves progress down the basilar membrane, they reach their peak and then rapidly diminish in amplitude at the part of the membrane that responds to the frequency of the sound wave created by the original stimulus. In this way, the basilar membrane accurately translates the frequency of sounds picked up by the ear into representative neural activity that can be sent to the brain.
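
To put rough numbers on this tonotopic arrangement, the place-frequency map of the human basilar membrane is often approximated with Greenwood's function. The sketch below uses the commonly cited human parameter fits (A = 165.4, a = 2.1, k = 0.88), which come from the psychoacoustics literature rather than from the article cited at the end of this post.

```python
# Greenwood's place-frequency function, a commonly cited approximation of
# the human basilar membrane's tonotopic map. The parameters (A = 165.4,
# a = 2.1, k = 0.88) are standard fits from the psychoacoustics literature,
# not values taken from the article cited here.

def greenwood_frequency(x: float) -> float:
    """Best-responding frequency (Hz) at fractional distance x from apex (0) to base (1)."""
    return 165.4 * (10 ** (2.1 * x) - 0.88)

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"x = {x:.2f} -> ~{greenwood_frequency(x):,.0f} Hz")
# Runs from roughly 20 Hz at the apex to roughly 20,000 Hz at the base,
# spanning the range of human hearing.
```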

Hair cells (on right). Image courtesy of OpenStax College.

The translation of the movement of the basilar membrane into electrical impulses occurs in the organ of Corti, which is the receptor organ of the ear. It sits atop the basilar membrane and contains around 16,000 receptor cells known as hair cells. Hair cells are so named because protruding from the top of each cell is a collection of somewhere between 50 and 200 small "hairs" called stereocilia. Hair cell stereocilia have fine fibers, known as tip links, that run between their tips; tip links are also attached to ion channels. When the basilar membrane vibrates, this induces movement of the hair cells, which causes the tip links to pull open the associated ion channels for a fraction of a millisecond. This is long enough to allow ions to rush through the ion channels to cause depolarization of the hair cell. Depolarization of hair cells leads to a release of neurotransmitters and the propagation of the auditory signal. The vestibulocochlear nerve then carries information about the auditory stimulus to the brain, where it is analyzed and consciously perceived.

Møller, A. (1994). Auditory Neurophysiology. Journal of Clinical Neurophysiology, 11(3), 284-308. doi:10.1097/00004691-199405000-00002

Let there be light: how light can affect our mood

If you're looking for an indication of how intricately human physiology is tied to the environment our species evolved in, you need look no further than our circadian clock. For, the internal environment of our body is regulated by 24-hour cycles that closely mirror the time it takes for the earth to rotate once on its axis. Moreover, these cycles are shaped by changes in the external environment (e.g. fluctuating levels of daylight) associated with that rotation. Indeed, this 24-hour cycle regulates everything from sleep to rate of metabolism to hormone release, and it is so refined that it continues even in the absence of environmental cues. In other words, even if you place a person in a room with no windows to see when the sun rises and sets and no clocks to know the time, he will maintain a regular circadian rhythm that approximates 24 hours.

Despite the ability of circadian rhythms to persist in the absence of environmental cues, however, our body clock is very responsive to the presence of light in the external environment. It uses information about illumination levels to synchronize diurnal physiological functions to occur during daylight hours and nocturnal functions to occur during the night. Thus, the presence or absence of light in the environment can indicate whether systems that promote wakefulness or sleep should be activated. In this way, ambient light (or lack thereof) becomes an important signal that can lead to the initiation of an array of biological functions.

It may not be surprising, then, that abnormalities in environmental illumination (e.g. it is light when the body's clock expects it to be dark) can have a generally disruptive effect on physiological function. Indeed, unexpected changes in light exposure have been associated with sleep disturbances, cognitive irregularities, and even mood disorders. Many of these problems are thought to occur due to a lack of accord between circadian rhythms and environmental light; however, the ability of light to affect mood directly, without first influencing circadian rhythms, is now also being recognized.

Physiology of light detection

For light to be able to influence the 24-hour clock, information about light in the environment must first be communicated to the brain. In non-mammalian vertebrates (e.g. fish, amphibians), there are photoreceptors outside of the eye that can accomplish this task. For example, some animals like lizards have a photoreceptive area below the skin on the top of their heads. This area, sometimes referred to as the third eye, responds to stimulation from light and sends information regarding light in the environment to areas of the brain involved in regulating circadian rhythms.

In humans and other mammals, however, it seems the eyes act as the primary devices for carrying information about light to the brain--even when that information isn't used in the process of image formation. The fact that some blind patients are able to maintain circadian rhythms and display circadian-related physiological changes in response to light stimulation suggests that the retinal mechanism for detecting light for non-image forming functions may involve cells other than the traditional photoreceptors (i.e. rods and cones). While up until about ten years ago it was thought that rods and cones were the only photoreceptive cells in the retina, it is now believed there may be a third class of photoreceptive cell. These cells, called intrinsically photosensitive retinal ganglion cells (ipRGCs), can respond to light independently of rods and cones. They are thought to have a limited role in conscious sight and image formation, but they may play an important part in transmitting information about environmental light to the brain.

ipRGCs project to various areas of the brain thought to be involved in the coordination of circadian rhythms, but their most important connection is to the suprachiasmatic nuclei (SCN). The SCN are paired structures found in the hypothalamus that each contain only about 10,000 neurons. Although 10,000 neurons is a relatively paltry number compared to other areas of the brain, these combined 20,000 neurons make up what is often referred to as the "master clock" of the body. Through an ingenious mechanism involving cycles of gene transcription and suppression (see here for more about this mechanism), the cells of the SCN independently display circadian patterns of activity, acting as reliable timekeepers for the body. Projections from the SCN to various other brain regions are responsible for coordinating circadian activity throughout the brain.
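
The transcription-suppression cycle mentioned above can be caricatured with a classic toy model, the Goodwin oscillator, in which a gene product feeds back to suppress its own transcription. To be clear, this is only an illustration of the kind of negative feedback loop involved; the variables and rate constants below are illustrative, not measurements of actual SCN clock genes.

```python
# A toy transcription-suppression loop (the classic Goodwin oscillator).
# This is a caricature of a circadian feedback loop, not a model of real
# SCN clock genes: variables and rate constants are purely illustrative.
#   x: mRNA level, with transcription suppressed by the repressor z
#   y: protein translated from x
#   z: processed repressor, which feeds back on transcription
# The classic analysis requires steep suppression (Hill exponent n > 8)
# for sustained oscillation; n = 16 is used here to be safe.

def goodwin_step(x, y, z, dt=0.01, n=16, alpha=0.5):
    dx = 1.0 / (1.0 + z ** n) - alpha * x   # transcription minus degradation
    dy = x - alpha * y                      # translation minus degradation
    dz = y - alpha * z                      # processing minus degradation
    return x + dx * dt, y + dy * dt, z + dz * dt

x, y, z = 0.1, 0.1, 0.1
for _ in range(200_000):                    # burn-in: settle onto the limit cycle
    x, y, z = goodwin_step(x, y, z)
for step in range(1, 1501):                 # ~15 time units, about two cycles
    x, y, z = goodwin_step(x, y, z)
    if step % 100 == 0:
        print(f"t={step * 0.01:5.1f}  repressor z={z:.3f}")
# z rises and falls periodically rather than settling to a constant value.
```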

Although the cells in the SCN are capable of maintaining circadian rhythms on their own, they need information from the external environment to match their oscillatory activity up with the solar day. This is where input from ipRGCs comes in; most of this input is supplied via a pathway that travels directly from the retina to the SCN called the retinohypothalamic tract. This tract uses glutamate signaling to notify the SCN when there is light in the external environment, ensuring SCN activity is in the diurnal phase when there is daylight present.

Thus, there is a complex machinery responsible for maintaining physiological activity on a semblance of a 24-hour schedule and matching that circadian cycle up with what is really going on in the outside world. When the operation of this machinery is disrupted in some way, however, it can contribute to a variety of problems.

Indirect effects of light on mood

The brain has evolved a number of mechanisms that allow circadian rhythms to remain synchronized with the solar day. However, when there are rapid changes in the timing of illumination in the external environment, this can lead to a desynchronization of circadian rhythms. This desynchronization then seems to have a disruptive effect on cognition and mood; thus, these effects are described as indirect effects of light on mood because light must first affect circadian rhythms, which in turn affect mood.

Transmeridian travel and shift work

An example of this type of circadian disruption occurs during rapid transmeridian travel, such as flying from New York to California. Crossing multiple time zones causes the body's clock to become discordant with the solar day; in the case of flying from New York to California, the sun goes down three hours later than the body's clock expects it to. This can result in a condition colloquially known as jet lag, but medically referred to by terms that imply circadian disruption: desynchronosis or circadian dysrhythmia.

Transmeridian travel can lead to a number of both cognitive and physical symptoms. Sleep disturbances afterward are common, as are mood disturbances like irritability and fatigue. Physical complaints like headache also frequently occur, and studies have found individuals who undergo transmeridian travel subsequently display decreased physical performance and endurance. Transmeridian travel has even been found to delay ovulation and disrupt the menstrual cycle in women. One study found airline workers who had been exposed to transmeridian travel for four years displayed deficits in cognitive performance, suggesting there may be a cumulative effect of jet lag on cognition.

Similar disruptions in cognition and physiological function can be seen in individuals who are exposed to high levels of nighttime illumination (e.g. those who work a night shift). People who are awake during nighttime hours and attempt to sleep during the day generally experience sleep disturbances that are associated with cognitive deficits and even symptoms of depression. The long-term effects of continued sleep/wake cycle disruption due to shift work involve a variety of negative outcomes, including an increased risk of cancer.

Seasonal affective disorder

In some cases of depression, symptoms begin to appear as the daylight hours become shorter in fall and winter months. The symptoms then often decrease in the spring or summer, and recur annually. This type of seasonal oscillation of depressive symptoms is known as seasonal affective disorder (SAD), and circadian rhythms are hypothesized to be at the heart of the affliction. The leading hypotheses regarding the etiology of SAD suggest it is associated with a desynchronization of circadian rhythms caused by seasonal changes in the length of the day.

According to this hypothesis, in patients with SAD circadian rhythms that are influenced by light become delayed when the sun rises later in the winter. However, some cycles (like the sleep-wake cycle) aren't delayed in the same manner, leading to a desynchronization between biological rhythms and the circadian oscillations of the SCN. One approach to treating SAD that has shown promise has been to expose patients to bright artificial light in the morning. This is meant to mimic the type of morning light exposure patients would receive during the spring and summer, and possibly shift their circadian rhythms (via the retinohypothalamic tract--see above) to regain synchrony. Indeed, studies have found bright light therapy to be just as effective as fluoxetine (Prozac) in treating patients with SAD.

Direct effects of light on mood

In the examples discussed so far, light exposure is hypothesized to lead to changes in mood due to the effects it can have on circadian rhythms. However, it is also becoming recognized that light exposure may be able to directly alter cognition and mood. The mechanisms underlying these effects are still poorly understood, but elucidating them may further aid us in understanding how light may be implicated in mood disorders.

The first studies in this area found that exposure to bright light decreased sleepiness, increased alertness, and improved performance on psychomotor vigilance tasks. More recently, it was observed that exposure to blue wavelength light activated areas of the brain involved in executive functions; another study found that exposure to blue wavelength light increased activity in areas of the brain like the amygdala and hypothalamus during the processing of emotional stimuli.

While it is still unclear what some of these direct effects on brain activity mean in functional terms, awareness of the potential effects of blue wavelength light has led to the investigation of how the use of electronic devices before bed might affect sleep. The results are harrowing for those of us who are prone to use a computer, phone, or e-reader leading up to bedtime: a recent study found that reading an e-reader for several hours before bed led to difficulty falling asleep, decreased alertness in the morning, and delays in the timing of the circadian clock.

Thus, it does seem that light is capable of affecting cognition and mood directly, and the effects may be surprisingly extensive. Interestingly, these types of effects have also been observed in studies with blind individuals, suggesting that direct effects of light exposure (like indirect effects) may be triggered by information sent via the non-image forming cells in the retina (e.g. ipRGCs). Despite the fact that this is a pathway by which light can affect mood without first influencing circadian rhythms, however, there is evidence circadian rhythms can still moderate that effect, as the direct effects of light may differ depending on the time of day the exposure occurs.

Light's powerful influence

Research into the effects of light on the brain has identified a potentially important role for light exposure in influencing mood and cognition. However, there is still much to be learned about the ways in which light is capable of exerting these types of effects. Nevertheless, this important area of research has brought to light (no pun intended) a previously unconsidered factor in the etiology of mood disorders. Furthermore, it has begun to raise awareness of the effects light might be having even during seemingly innocuous activities like using electronic devices before bed. When one considers how important a role sunlight has played in the survival of our species, it makes sense that the functioning of our bodies is so closely intertwined with the timing of the solar day. Perhaps what is surprising is that the advent of artificial lighting led us to believe that we could overcome the influence of that relationship. Recent research, however, suggests that our connection to daylight is more powerful than we had imagined.

LeGates, T., Fernandez, D., & Hattar, S. (2014). Light as a central modulator of circadian rhythms, sleep and affect. Nature Reviews Neuroscience, 15(7), 443-454. doi:10.1038/nrn3743

The neurobiological underpinnings of suicidal behavior

When you consider that so much of our energy and such a large portion of our behavioral repertoire is devoted to ways of ensuring our survival, suicide appears to be perhaps the most inexplicable human behavior. What would make this human machine--which most of the time seems to be resolutely programmed to scratch, claw, and fight to endure through even the most dire situations--so easily decide to give it all up, even when the circumstances may not objectively seem all that desperate? Suicide is a difficult behavior to justify rationally, and yet it is shockingly common. More people throughout the world end their lives by suicide each year than are killed by homicide and wars combined.

The many influences thought to contribute to suicidal behavior are also deeply intertwined and difficult to untangle. Clearly, among different individuals the factors that lead to an act of suicide will vary considerably; nevertheless, there are some variables that are thought to generally increase the risk of suicidal behavior. A number of studies have, for example, demonstrated that genetic factors are associated with a predisposition to suicidal behavior. Also, early-life adversity--like sexual abuse, physical abuse, or severe neglect--has been strongly linked to suicide. However, even among groups with higher suicide risk there is a great deal of variability, which adds to the complexity of the issue. For example, personality traits like impulsiveness and aggression have been associated with an increased risk of suicide--but this relationship is seen primarily in younger people. It is not as apparent in older individuals who display suicidal behavior, who are often characterized by higher levels of harm avoidance instead of risk-taking.

While there are a number of predisposing factors involving personal characteristics or previous life events that make suicidal ideation and behavior more likely, there are also factors that immediately precede a suicide attempt which are thought to be directly linked to the transition from thinking about suicide to acting on those thoughts. Of course, some of those factors are likely to involve changes in neurobiology and neurochemistry that cause suicide--which may previously have been just an occasional thought--to become the focus of a present-moment plan that is sometimes carried out with great urgency and determination. And, while it is important to be able to identify influences that predispose individuals to suicidal thinking in general, an understanding of the neurobiological factors that precipitate a suicide attempt might open the door for treatments designed to protect an individual from acting on (or experiencing) sudden impulses to complete a suicide.

While the distal predisposing factors for suicidal behavior are difficult to identify due to the myriad influences involved, the proximal neurobiological influences are hard to pinpoint due both to their complexity and to the fact that a suicidal crisis is often short-lived and difficult to study. The most direct way to investigate changes in the suicidal brain would be to look at the brains of individuals who are suicide completers (i.e. those who are now deceased due to suicide). One reason for focusing on suicide completers is that we can expect some neurochemical--and possibly psychological--differences between suicide completers and those who attempted suicide but are still alive. However, working with postmortem brains has its own limitations: obtaining accurate background information may be challenging without the ability to interview the patient; there may be effects on the brain (e.g. from the process of death and its associated trauma, or from drugs/medications taken before death) that make it hard to isolate factors involved in provoking one toward suicide; and being able to examine the brain at only a single point in time makes causal interpretations difficult.

Regardless, investigations into irregularities in the brains of those who exhibit suicidal behavior (both attempters and completers) have identified several possible contributing factors that may influence the decision to act on suicidal thoughts. Many of these factors are also implicated in depressed states, as most suicidal individuals display some characteristics of a depressed mood even if they don't meet the criteria for a diagnosis of major depressive disorder. (This, of course, adds another layer of complexity to interpretation, as it is difficult to determine whether these factors are simply characteristics of a depressed mood rather than specifically related to suicidal actions.) The role of each of these factors in suicidal behavior is still hypothetical, and the relative contribution of each is unknown. However, it is thought that some--or all--of them may be implicated in bringing about the brain state associated with suicidal actions.

Alterations in neurotransmitter systems

Abnormalities in the serotonin system have long been linked to depressive behavior, despite more recent doubts about the central role of serotonin in the etiology of depression. Similarly, there appear to be some anomalies in the serotonin system in the brains of suicidal individuals. In an early study on alterations in the serotonin system in depressed patients, Asberg et al. found that patients with low levels of 5-hydroxyindoleacetic acid, the primary metabolite of serotonin (and thus often used as a proxy measure of serotonin levels), were significantly more likely to attempt suicide. Additionally, those who survive a suicide attempt display a diminished response to the administration of fenfluramine, a serotonin agonist that in a typical brain prompts increased serotonin release. A number of neuroimaging studies have also detected reduced serotonin receptor availability in the brains of suicidal patients. This evidence all suggests that abnormalities in the serotonin system play some role in suicidal behavior, although the specifics of that role remain unknown.

As we have learned from investigations of depression, however, it is important to avoid focusing too much on one-neurotransmitter explanations of behavior. Accordingly, a number of other neurotransmitter abnormalities have been detected in suicidal patients as well. For example, gene expression analyses in postmortem brains of individuals who died by suicide have identified altered expression of genes encoding GABA and glutamate receptors in various areas of the brain. Although the consequences of these variations in gene expression are unknown, abnormalities in GABA and glutamate signaling have both also been hypothesized to play a role in depression.

Abnormalities in the stress response

Irregularities in the stress response have long been implicated in depression, and thus it may not be surprising that stress system anomalies have been observed in patients exhibiting suicidal behavior as well. The hypothalamic-pituitary-adrenal (HPA) axis is a network that connects the hypothalamus, pituitary gland, and adrenal glands; it is activated during stressful experiences. When the HPA axis is stimulated, corticotropin-releasing hormone is secreted from the hypothalamus, which causes the pituitary gland to secrete adrenocorticotropic hormone, which then prompts the adrenal glands to release the stress hormone cortisol. In depressed patients, cortisol levels are generally higher than normal, suggesting the HPA axis is hyperactive; this may be indicative of the patient being in a state of chronic stress.

In suicidal individuals, the HPA axis seems to be dysregulated as well. For example, in one study the HPA activity of a group of psychiatric inpatients was tested using what is known as the dexamethasone suppression test (DST). In this procedure, patients are injected with dexamethasone, a synthetic hormone that should act to suppress cortisol secretion if HPA axis activity is normal; if it does not do so, however, it suggests the HPA axis is hyperactive. Out of 78 patients, 32 displayed abnormal HPA activity on the DST. Over the next 15 years, 26.8% of the individuals with abnormal HPA activity committed suicide, while only 2.9% of the individuals with normal DST results killed themselves.
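
As a quick back-of-the-envelope comparison, the percentages reported in that study imply roughly a ninefold difference in suicide risk between the two groups. The sketch below uses only the summary figures given above, not the survival-analysis details of the original study.

```python
# Back-of-the-envelope arithmetic on the DST results described above.
# Only the summary percentages reported in the text are used; the
# survival-analysis details of the original study are not reproduced.

nonsuppressor_rate = 0.268  # suicide rate, abnormal DST (32 of 78 patients)
suppressor_rate = 0.029     # suicide rate, normal DST (remaining 46 patients)

relative_risk = nonsuppressor_rate / suppressor_rate
risk_difference = nonsuppressor_rate - suppressor_rate

print(f"Relative risk: {relative_risk:.1f}x")              # ~9.2x
print(f"Absolute risk difference: {risk_difference:.1%}")  # ~23.9 percentage points
```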

Another system involved in stress responses that may display irregularities in suicidal individuals is the polyamine stress response (PSR). Polyamines are molecules that are involved in a number of essential cellular functions; their potential role in psychiatric conditions has only been recognized in the past few decades. It is believed that the activation of the PSR and the associated increases in levels of polyamines in the brain may be beneficial, serving a protective role in reducing the impact of a stressor on the brain. And, there appear to be abnormalities in the PSR in the brains of those who have committed suicide. Because the PSR and its role in psychiatric conditions are still just beginning to be understood, however, it is unclear what these alterations in the PSR might mean; future investigations will attempt to elucidate the connection between PSR abnormalities and suicidal behavior.

One of the consequences of stress is the initiation of an inflammatory response. This is thought to be an adaptive reaction to stress, as the stress system likely evolved to deal primarily with physical trauma, and the body would have benefited from reflexive stimulation of the immune system in cases where physical damage had been sustained. This immune system activation would prepare the body to fight off infection that could occur due to potential tissue damage (the inflammatory response is the first step in preventing infection). Thus, it may not be surprising that suicidal patients often display markers of inflammation in the brain. This inflammatory reaction may on its own promote brain changes that increase suicide risk, or it may just be a corollary of the activation of the stress system.

Glial cell abnormalities

While we have a tendency to focus on irregularities in neurons and neuronal communication when investigating the causes of behavior, it is becoming more widely recognized that glial cells also play an essential role in healthy brain function. Accordingly, anomalies in glial cells have been noted in the brains of suicidal patients. Several studies, for example, have identified deficits in the structure or function of astrocytes in the suicidal brain. One study found that cortical astrocytes in post-mortem brains of suicide patients displayed altered morphology. Their enlarged cell bodies and other morphological abnormalities were consistent with the hypothesis that they had been affected by local inflammation. Analyses of gene expression in the postmortem brains of suicide victims also found that genes associated almost exclusively with astrocytes were differentially expressed. While the implications of these studies are not yet fully clear, abnormalities in glial cells represent another area of investigation in our attempts to understand what is happening in the suicidal brain.

Future directions

Irregularities in neurotransmitter systems, a hyperactive stress response, and anomalous glial cell morphology and density all may be factors that contribute to the suicidal phenotype. But it is unclear at this point if any one of these variables is the factor that determines the transition from suicidal ideation to suicidal behavior. It is more likely that they all may contribute to large-scale changes throughout the brain that lead to suicidal activity. Of course, all of the factors mentioned above may simply be associated with symptoms (like depressed mood) commonly seen in suicidal individuals, and the true culprit for provoking suicidal actions could be a different mechanism altogether, of which we are still unaware.

As mentioned above, this area of research is fraught with difficulties, as the brains of suicide completers can only be studied postmortem. One research approach that attempts to circumvent this obstacle while still providing relevant information on the suicidal brain involves the study of pharmacological agents that reduce the risk of suicide. For, if a drug reduces the risk of suicide, then perhaps it is reversing or diminishing the impact of neurobiological processes that trigger the event. One example of such a drug is lithium. Lithium is commonly used to treat bipolar disorder but is also recognized to reduce the risk of suicide in individuals who have a mood disorder. Gaining a better understanding of the mechanism of action that underlies this effect might allow for a better understanding of the neurobiology of suicidal behavior as well. Additionally, ketamine is a substance that appears to have fast-acting (within two hours after administration) antidepressant action and may also cause a rapid reduction (as soon as 40 minutes after administration) in suicidal thinking. Understanding how a drug can so quickly cause a shift away from suicidal thoughts may also shed some light on processes that underlie suicidal actions.

Whatever the neurobiological underpinnings of suicidal behavior may be, the search for them should be conducted with some urgency. Suicide was the 10th leading cause of death in the United States in 2013, and yet a treatment for suicidal behavior specifically is not pursued with the same fervor as treatments for other leading causes of death, like Parkinson's disease, that actually don't lead to as many deaths per year as suicide. Perhaps many consider suicide a fact of life, something that will always afflict a subset of the population, or perhaps the focus is primarily directed toward treating depression with the assumption that better management of depression will lead to a reduction in suicide attempts. However, if we can come to understand what really happens in the brain of someone immediately before he makes the fatal decision to kill himself, treatment to specifically reduce the risk of suicide--regardless of the underlying disorder--is not out of the realm of possibility. Thus, it seems like a goal worth striving for.

Turecki, G. (2014). The molecular bases of the suicidal brain. Nature Reviews Neuroscience, 15(12), 802-816. doi:10.1038/nrn3839

2-Minute Neuroscience: The Ventricles

In this video, I cover the ventricles. I discuss the function of the ventricles, which involves production and distribution of cerebrospinal fluid; I also briefly explain the functions of cerebrospinal fluid. I describe the structure of the ventricles, including descriptions of the lateral, third, and fourth ventricles, as well as the means by which the ventricles are connected to one another: the interventricular foramen and cerebral aqueduct. Finally, I mention hydrocephalus, a condition that occurs when cerebrospinal fluid levels in the ventricles get too high.

Know your brain: Spinal cord

Where is the spinal cord?

Spinal cord (in red). Image courtesy of William Crochot.

The spinal cord runs from the medulla oblongata of the brainstem down to the first or second lumbar vertebra of the vertebral column (aka the spine). The spinal cord is shorter than the vertebral column, and overall is a surprisingly small structure. It is only about 16.5-17.5 inches long on average, with a diameter of less than half an inch at its widest point.

What is the spinal cord and what does it do?

The spinal cord is one of the two major components of the central nervous system (the other being the brain); its proper functioning is absolutely essential to a healthy nervous system. The spinal cord contains motor neurons that innervate skeletal muscle and allow for movement as well as motor tracts that carry directives for motor movement down from the brain. The spinal cord also receives all of the sensory information from the periphery of our bodies, and contains pathways by which that sensory information is passed along to the brain.

Motor neurons leave the cord in collections of nerves called ventral rootlets, which then coalesce to form a ventral root. Sensory information is carried by sensory neurons in dorsal roots, which enter the cord in small bundles called dorsal rootlets. The cell bodies for these sensory neurons are clustered together in a structure called the dorsal root ganglion, which is found alongside the spinal cord. The ventral root and dorsal root come together just beyond the dorsal root ganglion (moving away from the cord) to form a spinal nerve.

Spinal nerves by the spinal cord segment they emerge from. Red = cervical, blue = thoracic, pink = lumbar, green = sacral.

Spinal nerves travel to the periphery of the body; there are 31 pairs of spinal nerves in total. Each area of the spinal cord from which a spinal nerve leaves is considered a segment and there are 31 segments in the spinal cord: 8 cervical, 12 thoracic, 5 lumbar, 5 sacral, and 1 coccygeal.

The spinal cord terminates in a cone-shaped structure called the conus medullaris, which is usually found at around the first or second lumbar vertebra (L1-L2). However, the spinal cord (like the brain) is surrounded by protective membranes known as the meninges, and the meningeal layers known as the dura mater and arachnoid mater continue for several more segments (to about the second sacral vertebra) beyond the end of the cord itself. Because this extension of the meningeal covering of the cord--sometimes referred to as the dural sheath--continues past the end of the cord, it creates a cerebrospinal fluid-filled cavity known as the lumbar cistern where there is no cord present. Additionally, although the conus medullaris is found at around L2, there are still several pairs of spinal nerves that must travel to the lower half of the body from the final segments of the cord. These nerves travel through the lumbar cistern; the straggly collection of fibers here is referred to as the cauda equina because it resembles a horse's tail. Cerebrospinal fluid is often taken from the lumbar cistern if it needs to be sampled for testing (e.g. for meningitis). This procedure is known as a lumbar puncture or spinal tap; it is done from the lumbar cistern because there is little risk of damaging the spinal cord by inserting a needle there (since the cord is not present at that level of the vertebral canal).

The spinal cord is attached to the end of the dural sheath by a thin extension of the pia mater known as the filum terminale. The filum terminale also extends from the end of the dural sheath to attach the spinal cord to the tailbone. In both cases, the filum terminale helps to anchor the cord in place.

Spinal cord in cross-section.

When you look at the spinal cord in cross-section (at any level) you will see what some describe as an H-shaped or a butterfly-shaped area of grey matter surrounded by white matter. The grey matter consists of cell bodies of motor and sensory neurons, and is divided into three regions. The area closest to the back of the spinal cord is called the posterior horn. This area consists of cell bodies of interneurons whose processes don't leave the spinal cord and neurons whose processes enter ascending tracts to carry sensory information up the cord. The substantia gelatinosa is an area of the posterior horn that is specialized to deal primarily with fibers carrying pain and temperature information.

The area of the grey matter closest to the front of the spinal cord is called the anterior horn. It contains the cell bodies of alpha motor neurons (aka lower motor neurons). These neurons leave the spinal cord in the ventral roots and project to skeletal muscle. They are responsible for all voluntary and involuntary movements.

The section of grey matter between the anterior and posterior horns is referred to as the intermediate grey matter. There is not a clear division between the anterior and posterior horns and the intermediate grey matter, so the intermediate grey matter contains some neurons that have characteristics similar to those found in each of the horns. It also contains a variety of interneurons involved in sensory and motor transmission. But the intermediate grey matter has unique functions as well, for it contains the cell bodies of autonomic neurons that are responsible for mediating involuntary processes in the body. These neurons are involved in internal organ functions that are not generally under conscious control, such as heart rate, respiration, digestion, etc.

The white matter that surrounds the grey matter is made up of bundles of ascending and descending fibers known as funiculi. Although the funiculi serve diverse functions, they are often grouped according to location into the posterior, lateral, and anterior funiculi. Each of these funiculi is made up of a variety of ascending and descending tracts, but the funiculi are frequently associated with a small number of important, well-defined tracts whose fibers are carried within them.

For example, the posterior funiculi contain the posterior columns, important fibers that carry information regarding tactile (i.e. touch) sensations and proprioception to the brain. At the level of the medulla, these fibers form the medial lemniscus, another tract that continues to carry the information on to the thalamus and somatosensory cortex. The whole pathway (from the spinal cord to the somatosensory cortex) is often referred to as the posterior (or dorsal) columns-medial lemniscus system.

The lateral funiculi contain important fibers that carry pain and temperature sensations to the brain. These fibers (some of which enter the lateral funiculi from the substantia gelatinosa) are part of what is called the anterolateral system, which consists of several pathways that carry information regarding painful sensations to various sites in the brain and brainstem. The tracts that are part of the anterolateral system include: the spinothalamic tract, which is important for creating awareness of and identifying the location of painful stimuli; the spinomesencephalic tract, which is involved in inhibiting painful sensations; and the spinoreticular tract, which is involved in directing attention to painful stimuli.

The lateral funiculi also contain an important motor pathway: the corticospinal tract. The corticospinal tract fibers originate in the cerebral cortex (e.g. the precentral gyrus, or primary motor cortex) and synapse on alpha motor neurons in the anterior horn. The axons of these alpha motor neurons then travel to skeletal muscle to initiate movement, and therefore the corticospinal tract plays an important role in voluntary movement.

The anterior funiculi aren't defined by a specific tract that travels through them. They contain a variety of ascending and descending tracts, including some fibers from the corticospinal tract.

Thus, the spinal cord acts as the intermediary between the brain and the body, and all sensory and motor signals pass through it before reaching their final destination. This is why a healthy spinal cord is crucial and damage to the spinal cord can be debilitating or life threatening.

To learn more about the spinal cord, check out this set of 2-Minute Neuroscience videos:

2-Minute Neuroscience: Exterior of the Spinal Cord

2-Minute Neuroscience: Spinal Cord Cross-section