Know your brain: Cochlea

Where is the cochlea?

Cochlea and cochlea in cross-section. Image courtesy of OpenStax College.

The cochlea is a coiled structure that resembles a snail shell (cochlea comes from the Greek kochlos, which means "snail"); it is found within the inner ear. It is a small--yet complex--structure (about the size of a pea) that consists of three canals that run parallel to one another: the scala vestibuli, scala media, and scala tympani.

What is the cochlea and what does it do?

When sound waves travel through the canal of our outer ear, they hit the tympanic membrane (aka eardrum) and cause it to vibrate. This vibration prompts movement in the ossicles, a trio of tiny bones that transmit the vibration to a structure called the oval window, which sits in the wall of the cochlea. The ossicle bone known as the stapes taps on the oval window to pass the vibration on to the cochlea, all the while using a fine-tuned movement that preserves the frequency of the original sound wave that hit the eardrum.

The cochlea is filled with fluid. Specifically, the scala vestibuli and scala tympani contain a fluid called perilymph, which is similar in composition to cerebrospinal fluid, and the scala media contains endolymph, which more resembles intracellular fluid in terms of its ionic concentrations. When the oval window is depressed by the stapes, it creates waves that travel through the fluid of the cochlea, and these waves cause a structure called the basilar membrane to move as well.

The basilar membrane separates the scala tympani from the scala media. When waves flow through the fluid in the cochlea, they create small ripples that travel down the basilar membrane itself (to visualize these ripples imagine the basilar membrane as a rug someone is shaking out). The basilar membrane is structured such that different sections of the membrane respond preferentially to different frequencies of sound. As waves progress down the basilar membrane, they reach their peak and then rapidly diminish in amplitude at the part of the membrane that responds to the frequency of the sound wave created by the original stimulus. In this way, the basilar membrane accurately translates the frequency of sounds picked up by the ear into representative neural activity that can be sent to the brain.
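The tonotopic layout described above is often summarized quantitatively by the Greenwood function, which maps position along the basilar membrane to the frequency that location responds to best. The sketch below uses Greenwood's commonly cited constants for the human cochlea; treat the exact values as illustrative rather than definitive.

```python
def greenwood_frequency(x, A=165.4, a=2.1, k=0.88):
    """Approximate characteristic frequency (Hz) at fractional distance x
    along the human basilar membrane, measured from the apex (x = 0) to
    the base (x = 1). The constants are Greenwood's commonly cited human
    fits and are used here purely for illustration."""
    return A * (10 ** (a * x) - k)

# Low frequencies peak near the apex and high frequencies near the base;
# the two endpoints roughly span the range of human hearing (~20 Hz to ~20 kHz).
```

The important qualitative point, matching the paragraph above, is that frequency rises monotonically (and roughly exponentially) as you move from apex to base, so each sound frequency has a well-defined place of maximal response.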

Hair cells (on right). Image courtesy of OpenStax College.

The translation of the movement of the basilar membrane into electrical impulses occurs in the organ of Corti, which is the receptor organ of the ear. It sits atop the basilar membrane and contains around 16,000 receptor cells known as hair cells. Hair cells are so named because protruding from the top of each cell is a collection of somewhere between 50 and 200 small "hairs" called stereocilia. Hair cell stereocilia have fine fibers, known as tip links, that run between their tips; tip links are also attached to ion channels. When the basilar membrane vibrates, this induces movement of the hair cells, which causes the tip links to pull open the associated ion channels for a fraction of a millisecond. This is long enough to allow ions to rush through the ion channels to cause depolarization of the hair cell. Depolarization of hair cells leads to a release of neurotransmitters and the propagation of the auditory signal. The vestibulocochlear nerve will carry the information regarding the auditory stimulus to the brain, where it will be analyzed and consciously perceived.

Møller, A. (1994). Auditory neurophysiology. Journal of Clinical Neurophysiology, 11(3), 284-308. DOI: 10.1097/00004691-199405000-00002

Let there be light: how light can affect our mood

If you're looking for an indication of how intricately human physiology is tied to the environment our species evolved in, you need look no further than our circadian clock. The internal environment of our body is regulated by 24-hour cycles that closely mirror the time it takes for the earth to rotate once on its axis, and these cycles are shaped by changes in the external environment (e.g. fluctuating levels of daylight) associated with that rotation. This 24-hour cycle regulates everything from sleep to rate of metabolism to hormone release, and it is so refined that it continues even in the absence of environmental cues. In other words, even if you place a person in a room with no windows to see when the sun rises and sets and no clocks to tell the time, he will maintain a regular circadian rhythm that approximates 24 hours.

Despite the ability of circadian rhythms to persist in the absence of environmental cues, however, our body clock is very responsive to the presence of light in the external environment. It uses information about illumination levels to synchronize diurnal physiological functions to occur during daylight hours and nocturnal functions to occur during the night. Thus, the presence or absence of light in the environment can indicate whether systems that promote wakefulness or sleep should be activated. In this way, ambient light (or lack thereof) becomes an important signal that can lead to the initiation of an array of biological functions.

It may not be surprising then that abnormalities in environmental illumination (e.g. it is light when the body's clock expects it to be dark) can have a generally disrupting effect on physiological function. Indeed, unexpected changes in light exposure levels have been associated with sleep disturbances, cognitive irregularities, and even mood disorders. Many of these problems are thought to occur due to lack of accord between circadian rhythms and environmental light; however, a role is now also being recognized for the ability of light to affect mood directly, without first influencing circadian rhythms.

Physiology of light detection

For light to be able to influence the 24-hour clock, information about light in the environment must first be communicated to the brain. In non-mammalian vertebrates (e.g. fish, amphibians, reptiles), there are photoreceptors outside of the eye that can accomplish this task. For example, some animals like lizards have a photoreceptive area below the skin on the top of their heads. This area, sometimes referred to as the third eye, responds to stimulation from light and sends information regarding light in the environment to areas of the brain involved in regulating circadian rhythms.

In humans and other mammals, however, it seems the eyes act as the primary devices for carrying information about light to the brain--even when that information isn't used in the process of image formation. The fact that some blind patients are able to maintain circadian rhythms and display circadian-related physiological changes in response to light stimulation suggests that the retinal mechanism for detecting light for non-image forming functions may involve cells other than the traditional photoreceptors (i.e. rods and cones). While up until about ten years ago it was thought that rods and cones were the only photoreceptive cells in the retina, it is now believed there may be a third class of photoreceptive cell. These cells, called intrinsically photoreceptive retinal ganglion cells (ipRGCs), can respond to light independently of rods and cones. They are thought to have a limited role in conscious sight and image formation, but they may play an important part in transmitting information about environmental light to the brain.

ipRGCs project to various areas of the brain thought to be involved in the coordination of circadian rhythms, but their most important connection is to the suprachiasmatic nuclei (SCN). The SCN are paired structures found in the hypothalamus that each contain only about 10,000 neurons. Although 10,000 neurons is a relatively paltry number compared to other areas of the brain, these combined 20,000 neurons make up what is often referred to as the "master clock" of the body. Through an ingenious mechanism involving cycles of gene transcription and suppression (see here for more about this mechanism), the cells of the SCN independently display circadian patterns of activity, acting as reliable timekeepers for the body. Projections from the SCN to various other brain regions are responsible for coordinating circadian activity throughout the brain.

Although the cells in the SCN are capable of maintaining circadian rhythms on their own, they need information from the external environment to match their oscillatory activity up with the solar day. This is where input from ipRGCs comes in; most of this input is supplied via a pathway that travels directly from the retina to the SCN called the retinohypothalamic tract. This tract uses glutamate signaling to notify the SCN when there is light in the external environment, ensuring SCN activity is in the diurnal phase when there is daylight present.
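The entrainment process described above can be caricatured with a simple phase-only model: the clock free-runs with an intrinsic period slightly different from 24 hours, and daily light input (standing in for the retinohypothalamic signal) corrects a fraction of the accumulated mismatch. All numbers below are invented for illustration and are not fitted to SCN physiology.

```python
def entrain(days=30, intrinsic_period=24.5, light_gain=0.3):
    """Toy model of light entrainment. Each simulated day the clock drifts
    by (intrinsic_period - 24) hours, and the light signal corrects a fixed
    fraction (light_gain) of the mismatch. Returns the mismatch between
    body clock and solar day after each simulated day, in hours."""
    phase_error = 0.0
    history = []
    for _ in range(days):
        phase_error += intrinsic_period - 24.0  # free-running drift
        phase_error *= (1.0 - light_gain)       # light-driven correction
        history.append(phase_error)
    return history
```

Without the correction term, the mismatch grows without bound (half an hour per day in this sketch); with it, the mismatch settles at a small stable offset, analogous to a clock that stays synchronized to the solar day despite an imperfect intrinsic period.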

Thus, there is a complex machinery responsible for maintaining physiological activity on a semblance of a 24-hour schedule and matching that circadian cycle up with what is really going on in the outside world. When the operation of this machinery is disrupted in some way, however, it can contribute to a variety of problems.

Indirect effects of light on mood

The brain has evolved a number of mechanisms that allow circadian rhythms to remain synchronized with the solar day. However, when there are rapid changes in the timing of illumination in the external environment, this can lead to a desynchronization of circadian rhythms. This desynchronization then seems to have a disruptive effect on cognition and mood; thus, these effects are described as indirect effects of light on mood because light must first affect circadian rhythms, which in turn affect mood.

Transmeridian travel and shift work

An example of this type of circadian disruption occurs during rapid transmeridian travel, such as flying from New York to California. Crossing multiple time zones causes the body's clock to become discordant with the solar day; in the case of flying from New York to California the body would expect the sun to go down three hours earlier than it actually would in the new time zone. This can result in a condition colloquially known as jet lag, but medically referred to by terms that imply circadian disruptions: desynchronosis or circadian dysrhythmia.

Transmeridian travel can lead to a number of both cognitive and physical symptoms. Sleep disturbances afterwards are common, as are mood disturbances like irritability and fatigue. Physical complaints like headache also frequently occur, and studies have found individuals who undergo transmeridian travel subsequently display decreased physical performance and endurance. Transmeridian travel has even been found to delay ovulation and disrupt the menstrual cycle in women. One study found airline workers who had been exposed to transmeridian travel for four years displayed deficits in cognitive performance, suggesting there may be a cumulative effect of jet lag on cognition.

Similar disruptions in cognition and physiological function can be seen in individuals who are exposed to high levels of nighttime illumination (e.g. those who work a night shift). People who are awake during nighttime hours and attempt to sleep during the day generally experience sleep disturbances that are associated with cognitive deficits and even symptoms of depression. The long-term effects of continued sleep/wake cycle disruption due to shift work involve a variety of negative outcomes, including an increased risk of cancer.

Seasonal affective disorder

In some cases of depression, symptoms begin to appear as the daylight hours become shorter in fall and winter months. The symptoms then often decrease in the spring or summer, and recur annually. This type of seasonal oscillation of depressive symptoms is known as seasonal affective disorder (SAD), and circadian rhythms are hypothesized to be at the heart of the affliction. The leading hypotheses regarding the etiology of SAD suggest it is associated with a desynchronization of circadian rhythms caused by seasonal changes in the length of the day.

According to this hypothesis, in patients with SAD circadian rhythms that are influenced by light become delayed when the sun rises later in the winter. However, some cycles (like the sleep-wake cycle) aren't delayed in the same manner, leading to a desynchronization between biological rhythms and the circadian oscillations of the SCN. One approach to treating SAD that has shown promise has been to expose patients to bright artificial light in the morning. This is meant to mimic the type of morning light exposure patients would receive during the spring and summer, and possibly shift their circadian rhythms (via the retinohypothalamic tract--see above) to regain synchrony. Indeed, studies have found bright light therapy to be just as effective as fluoxetine (Prozac) in treating patients with SAD.

Direct effects of light on mood

In the examples discussed so far, light exposure is hypothesized to lead to changes in mood due to the effects it can have on circadian rhythms. However, it is also becoming recognized that light exposure may be able to directly alter cognition and mood. The mechanisms underlying these effects are still poorly understood, but elucidating them may further aid us in understanding how light may be implicated in mood disorders.

The first studies in this area found that exposure to bright light decreased sleepiness, increased alertness, and improved performance on psychomotor vigilance tasks. More recently, it was observed that exposure to blue wavelength light activated areas of the brain involved in executive functions; another study found that exposure to blue wavelength light increased activity in areas of the brain like the amygdala and hypothalamus during the processing of emotional stimuli.

While it is still unclear what some of these direct effects on brain activity mean in functional terms, awareness of the potential effects of blue wavelength light has led to the investigation of how the use of electronic devices before bed might affect sleep. The results are harrowing for those of us who are prone to use a computer, phone, or e-reader leading up to bedtime: a recent study found reading an e-reader for several hours before bed led to difficulty falling asleep, decreased alertness in the morning, and delays in the timing of the circadian clock.

Thus, it does seem that light is capable of affecting cognition and mood directly, and the effects may be surprisingly extensive. Interestingly, these types of effects have also been observed in studies with blind individuals, suggesting that direct effects of light exposure (like indirect effects) may be triggered by information sent via the non-image forming cells in the retina (e.g. ipRGCs). Despite the fact that this is a pathway by which light can affect mood without first influencing circadian rhythms, however, there is evidence circadian rhythms can still moderate that effect, as the direct effects of light may differ depending on the time of day the exposure occurs.

Light's powerful influence

Research into the effects of light on the brain has identified a potentially important role for light exposure in influencing mood and cognition. However, there is still much to be learned about the ways in which light is capable of exerting these types of effects. Nevertheless, this important area of research has brought to light (no pun intended) a previously unconsidered factor in the etiology of mood disorders. Furthermore, it has begun to raise awareness to the effects light might be having even during seemingly innocuous activities like using electronic devices before bed. When one considers how important a role sunlight has played in the survival of our species, it makes sense that the functioning of our bodies is so closely intertwined with the timing of the solar day. Perhaps what is surprising is that the advent of artificial lighting led us to believe that we could overcome the influence of that relationship. Recent research, however, suggests that our connection to daylight is more powerful than we had imagined.

LeGates, T., Fernandez, D., & Hattar, S. (2014). Light as a central modulator of circadian rhythms, sleep and affect. Nature Reviews Neuroscience, 15(7), 443-454. DOI: 10.1038/nrn3743

The neurobiological underpinnings of suicidal behavior

When you consider that so much of our energy and such a large portion of our behavioral repertoire is devoted to ways of ensuring our survival, suicide appears to be perhaps the most inexplicable human behavior. What would make this human machine--which most of the time seems to be resolutely programmed to scratch, claw, and fight to endure through even the most dire situations--so easily decide to give it all up, even when the circumstances may not objectively seem all that desperate? Suicide is a difficult behavior to justify rationally, and yet it is shockingly common. More people throughout the world end their lives by suicide each year than are killed by homicide and wars combined.

The multitudinous influences that are thought to contribute to suicidal behavior are also very convoluted and difficult to untangle. Clearly, among different individuals the factors that lead to an act of suicide will vary considerably; nevertheless, there are some variables that are thought to generally increase the risk of suicidal behavior. A number of studies have, for example, demonstrated that genetic factors are associated with a predisposition to suicidal behavior. Also, early-life adversity--like sexual abuse, physical abuse, or severe neglect--has been strongly linked to suicide. However, even among groups with higher suicide risk there is a great deal of variability, which adds to the complexity of the issue. For example, personality traits like impulsiveness and aggression have been associated with an increased risk of suicide--but this relationship is seen primarily in younger people. It is not as apparent in older individuals who display suicidal behavior; they are often characterized by higher levels of harm avoidance instead of risk-taking.

While there are a number of predisposing factors involving personal characteristics or previous life events that make suicidal ideation and behavior more likely, there are also factors that immediately precede a suicide attempt which are thought to be directly linked to the transition from thinking about suicide to acting on those thoughts. Of course, some of those factors are likely to involve changes in neurobiology and neurochemistry that cause suicide--which may have previously just been an occasional thought--to become the focus of a present-moment plan that is sometimes carried out with great urgency and determination. And, while it is important to be able to identify influences that predispose individuals to suicidal thinking in general, an understanding of the neurobiological factors that precipitate a suicide attempt might open the door for treatments designed to protect an individual from acting on (or experiencing) sudden impulses to complete a suicide.

While the distal predisposing factors for suicidal behavior are difficult to identify due to the myriad influences involved, the proximal neurobiological influences are hard to pinpoint due both to their complexity and the fact that a suicidal crisis is often short-lived and difficult to study. The most direct way to investigate changes in the suicidal brain would be to look at brains of individuals who are suicide completers (i.e. those who are now deceased due to suicide). One reason for focusing on suicide completers is that we can expect some neurochemical--and possibly psychological--differences between suicide completers and those who attempted suicide but are still alive. However, working with postmortem brains has its own limitations: obtaining accurate background information may be challenging without the ability to interview the patient, there may be effects on the brain (e.g. from the process of death and its associated trauma or from drugs/medications taken before death) that may make it hard to isolate factors involved in provoking one towards suicide, and the limitation of only being able to examine the brain at one time makes causal interpretations difficult.

Regardless, investigations into irregularities in the brains of those who exhibit suicidal behavior (both attempters and completers) have identified several possible contributing factors that may influence the decision to act on suicidal thoughts. Many of these factors are also implicated in depressed states, as most suicidal individuals display some characteristics of a depressed mood even if they don't meet the criteria for a diagnosis of major depressive disorder. (This, of course, adds another layer of complexity to interpretation as it is difficult to determine if suicide-related factors are simply characteristics of a depressed mood and not solely related to suicidal actions.) The role of each of these factors in suicidal behavior is still hypothetical, and the relative contribution of each is unknown. However, it is thought that some--or all--of them may be implicated in bringing about the brain state associated with suicidal actions.

Alterations in neurotransmitter systems

Abnormalities in the serotonin system have long been linked to depressive behavior, despite more recent doubts about the central role of serotonin in the etiology of depression. Similarly, there appear to be some anomalies in the serotonin system in the brains of suicidal individuals. In an early study on alterations in the serotonin system in depressed patients, Asberg et al. found that patients with low levels of 5-hydroxyindoleacetic acid, the primary metabolite of serotonin (and thus often used as a proxy measure of serotonin levels), were significantly more likely to attempt suicide. Additionally, those who survive a suicide attempt display a diminished response to the administration of fenfluramine, which is a serotonin agonist that in a typical brain prompts increased serotonin release. A number of neuroimaging studies have also detected reduced serotonin receptor availability in the brains of suicidal patients. This evidence all suggests that abnormalities in the serotonin system play some role in suicidal behavior, although the specifics of that role remain unknown.

As we have learned from investigations of depression, however, it is important to avoid focusing too much on one-neurotransmitter explanations of behavior. Accordingly, a number of other neurotransmitter abnormalities have been detected in suicidal patients as well. For example, gene expression analyses in postmortem brains of individuals who died by suicide have identified altered expression of genes encoding for GABA and glutamate receptors in various areas of the brain. Although the consequences of these variations in gene expression are unknown, abnormalities in GABA and glutamate signaling have both also been hypothesized to play a role in depression.

Abnormalities in the stress response

Irregularities in the stress response have long been implicated in depression, and thus it may not be surprising that stress system anomalies have been observed in patients exhibiting suicidal behavior as well. The hypothalamic-pituitary-adrenal (HPA) axis is a network that connects the hypothalamus, pituitary gland, and adrenal glands; it is activated during stressful experiences. When the HPA axis is stimulated, corticotropin-releasing hormone is secreted from the hypothalamus, which causes the pituitary gland to secrete adrenocorticotropic hormone, which then prompts the adrenal glands to release the stress hormone cortisol. In depressed patients, cortisol levels are generally higher than normal, suggesting the HPA axis is hyperactive; this may be indicative of the patient being in a state of chronic stress.

In suicidal individuals, the HPA axis seems to be dysregulated as well. For example, in one study the HPA activity of a group of psychiatric inpatients was tested using what is known as the dexamethasone suppression test (DST). In this procedure, patients are injected with dexamethasone, a synthetic hormone that should act to suppress cortisol secretion if HPA axis activity is normal; if it does not do so, however, it suggests the HPA axis is hyperactive. Out of 78 patients, 32 displayed abnormal HPA activity on the DST. Over the next 15 years, 26.8% of the individuals with abnormal HPA activity committed suicide, while only 2.9% of the individuals with normal DST results killed themselves.
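As a quick sanity check on the figures above, the reported rates imply roughly a ninefold difference in eventual suicide risk between the two groups. The snippet below just does the arithmetic on the study's published numbers as stated in the text:

```python
# Figures from the DST study described above.
total_patients = 78
abnormal_n = 32                  # patients with abnormal (non-suppressing) DST results
normal_n = total_patients - abnormal_n
rate_abnormal = 0.268            # eventual suicide rate, abnormal-DST group
rate_normal = 0.029              # eventual suicide rate, normal-DST group

relative_risk = rate_abnormal / rate_normal
print(f"Relative risk of suicide, abnormal vs. normal DST: {relative_risk:.1f}")
```

The roughly ninefold elevation in risk is what makes the DST result striking, even though the test itself only indexes one component (HPA axis suppression) of the stress response.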

Another system involved in stress responses that may display irregularities in suicidal individuals is the polyamine stress response (PSR). Polyamines are molecules that are involved in a number of essential cellular functions; their potential role in psychiatric conditions has only been recognized in the past few decades. It is believed that the activation of the PSR and the associated increases in levels of polyamines in the brain may be beneficial, serving a protective role in reducing the impact of a stressor on the brain. And, there appear to be abnormalities in the PSR in the brains of those who have committed suicide. Because the PSR and its role in psychiatric conditions is still just beginning to be understood, however, it is unclear what these alterations in the PSR might mean; future investigations will attempt to elucidate the connection between PSR abnormalities and suicidal behavior.

One of the consequences of stress is the initiation of an inflammatory response. This is thought to be an adaptive reaction to stress, as the stress system likely evolved to deal primarily with physical trauma, and the body would have benefited from reflexive stimulation of the immune system in cases where physical damage had been sustained. This immune system activation would prepare the body to fight off infection that could occur due to potential tissue damage (the inflammatory response is the first step in preventing infection). Thus, it may not be surprising that suicidal patients often display markers of inflammation in the brain. This inflammatory reaction may on its own promote brain changes that increase suicide risk, or it may just be a corollary of the activation of the stress system.

Glial cell abnormalities

While we have a tendency to focus on irregularities in neurons and neuronal communication when investigating the causes of behavior, it is becoming more widely recognized that glial cells also play an essential role in healthy brain function. Accordingly, anomalies in glial cells have been noted in the brains of suicidal patients. Several studies, for example, have identified deficits in the structure or function of astrocytes in the suicidal brain. One study found that cortical astrocytes in post-mortem brains of suicide patients displayed altered morphology. Their enlarged cell bodies and other morphological abnormalities were consistent with the hypothesis that they had been affected by local inflammation. Analyses of gene expression in the postmortem brains of suicide victims also found that genes associated almost exclusively with astrocytes were differentially expressed. While the implications of these studies are not yet fully clear, abnormalities in glial cells represent another area of investigation in our attempts to understand what is happening in the suicidal brain.

Future directions

Irregularities in neurotransmitter systems, a hyperactive stress response, and anomalous glial cell morphology and density all may be factors that contribute to the suicidal phenotype. But it is unclear at this point if any one of these variables is the factor that determines the transition from suicidal ideation to suicidal behavior. It is more likely that they all may contribute to large-scale changes throughout the brain that lead to suicidal activity. Of course, all of the factors mentioned above may simply be associated with symptoms (like depressed mood) commonly seen in suicidal individuals, and the true culprit for provoking suicidal actions could be a different mechanism altogether, of which we are still unaware.

As mentioned above, this area of research is fraught with difficulties as the brains of suicide completers can only be studied postmortem. One research approach that attempts to circumvent this obstacle while still providing relevant information on the suicidal brain involves the study of pharmacological agents that reduce the risk of suicide. If a drug reduces the risk of suicide, then perhaps it is reversing or diminishing the impact of neurobiological processes that trigger the event. One example of such a drug is lithium. Lithium is commonly used to treat bipolar disorder but is also recognized to reduce the risk of suicide in individuals who have a mood disorder. Gaining a better understanding of the mechanism of action that underlies this effect might allow for a better understanding of the neurobiology of suicidal behavior as well. Additionally, ketamine is a substance that appears to have fast-acting (within two hours after administration) antidepressant action and also may cause a rapid reduction (as soon as 40 minutes after administration) in suicidal thinking. Understanding how a drug can so quickly cause a shift away from suicidal thoughts may also be able to shed some light on processes that underlie suicidal actions.

Whatever the neurobiological underpinnings of suicidal behavior may be, the search for them should have some urgency about it. Suicide was the 10th leading cause of death in 2013, and yet it seems like a treatment for suicidal behavior specifically is not approached with the same fervor as treatments for other leading causes of death, like Parkinson's disease, that actually don't lead to as many deaths per year as suicide. Perhaps many consider suicide a fact of life, as something that will always afflict a subset of the population, or perhaps the focus is primarily directed toward treating depression with the assumption that better management of depression will lead to a reduction in suicide attempts. However, if we can come to understand what really happens in the brain of someone immediately before he makes the fatal decision to kill himself, treatment to specifically reduce the risk of suicide--regardless of the underlying disorder--is not out of the realm of possibility. Thus, it seems like a goal worth striving for.

Turecki, G. (2014). The molecular bases of the suicidal brain. Nature Reviews Neuroscience, 15(12), 802-816. DOI: 10.1038/nrn3839

2-Minute Neuroscience: The Ventricles

In this video, I cover the ventricles. I discuss the function of the ventricles, which involves production and distribution of cerebrospinal fluid; I also briefly explain the functions of cerebrospinal fluid. I describe the structure of the ventricles, including descriptions of the lateral, third, and fourth ventricles, as well as the means by which the ventricles are connected to one another: the interventricular foramen and cerebral aqueduct. Finally, I mention hydrocephalus, a condition that occurs when cerebrospinal fluid levels in the ventricles get too high.

Know your brain: Spinal cord

Where is the spinal cord?

Spinal cord (in red). Image courtesy of William Crochot.

The spinal cord runs from the medulla oblongata of the brainstem down to the first or second lumbar vertebra of the vertebral column (aka the spine). The spinal cord is shorter than the vertebral column, and overall is a surprisingly small structure. It is only about 16.5-17.5 inches long on average, with a diameter of less than half an inch at its widest point.

What is the spinal cord and what does it do?

The spinal cord is one of the two major components of the central nervous system (the other being the brain); its proper functioning is absolutely essential to a healthy nervous system. The spinal cord contains motor neurons that innervate skeletal muscle and allow for movement as well as motor tracts that carry directives for motor movement down from the brain. The spinal cord also receives all of the sensory information from the periphery of our bodies, and contains pathways by which that sensory information is passed along to the brain.

Motor neurons leave the cord in collections of nerves called ventral rootlets, which then coalesce to form a ventral root. Sensory information is carried by sensory neurons in dorsal roots, which enter the cord in small bundles called dorsal rootlets. The cell bodies for these sensory neurons are clustered together in a structure called the dorsal root ganglion, which is found alongside the spinal cord. The ventral root and dorsal root come together just beyond the dorsal root ganglion (moving away from the cord) to form a spinal nerve.

Spinal nerves by the spinal cord segment they emerge from. Red = cervical, blue = thoracic, pink = lumbar, green = sacral.

Spinal nerves travel to the periphery of the body; there are 31 pairs of spinal nerves in total. Each area of the spinal cord from which a spinal nerve leaves is considered a segment and there are 31 segments in the spinal cord: 8 cervical, 12 thoracic, 5 lumbar, 5 sacral, and 1 coccygeal.

The spinal cord terminates in a cone-shaped structure called the conus medullaris, which is usually found at around the first or second lumbar vertebra (L1-L2). However, the spinal cord (like the brain) is surrounded by protective membranes known as the meninges, and the meningeal layers known as the dura mater and arachnoid mater continue for several more segments (to about the second sacral vertebra) beyond the end of the cord itself. Because this extension of the meningeal covering of the cord--sometimes referred to as the dural sheath--continues past the end of the cord, it creates a cerebrospinal fluid-filled cavity known as the lumbar cistern where no cord is present. Additionally, although the conus medullaris is found at around L2, there are still several pairs of spinal nerves that must travel to the lower half of the body from the final segments of the cord. These nerves travel through the lumbar cistern; the straggly collection of fibers here is referred to as the cauda equina because it resembles a horse's tail. Cerebrospinal fluid is often taken from the lumbar cistern if it needs to be sampled for testing (e.g. for meningitis). This procedure is known as a lumbar puncture or spinal tap; it is done at the lumbar cistern because there is little risk of damaging the spinal cord by inserting a needle there (since the cord is not present at that level of the vertebral canal).

The spinal cord is attached to the end of the dural sheath by a thin extension of the pia mater known as the filum terminale; the filum terminale also extends from the end of the dural sheath to attach the sheath to the tailbone. In both cases, the filum terminale helps to anchor the cord in place.

Spinal cord in cross-section.

When you look at the spinal cord in cross-section (at any level) you will see what some describe as an H-shaped or a butterfly-shaped area of grey matter surrounded by white matter. The grey matter consists of cell bodies of motor and sensory neurons, and is divided into three regions. The area closest to the back of the spinal cord is called the posterior horn. This area consists of cell bodies of interneurons whose processes don't leave the spinal cord and neurons whose processes enter ascending tracts to carry sensory information up the cord. The substantia gelatinosa is an area of the posterior horn that is specialized to deal primarily with fibers carrying pain and temperature information.

The area of the grey matter closest to the front of the spinal cord is called the anterior horn. It contains the cell bodies of alpha motor neurons (aka lower motor neurons). These neurons leave the spinal cord in the ventral roots and project to skeletal muscle. They are responsible for all voluntary and involuntary movements.

The section of grey matter between the anterior and posterior horns is referred to as the intermediate grey matter. There is not a clear division between the anterior and posterior horns and the intermediate grey matter, so the intermediate grey matter contains some neurons that have characteristics similar to those found in each of the horns. It also contains a variety of interneurons involved in sensory and motor transmission. But the intermediate grey matter has unique functions as well, for it contains the cell bodies of autonomic neurons that are responsible for mediating involuntary processes in the body. These neurons are involved in internal organ functions that are not generally under conscious control, such as heart rate, respiration, digestion, etc.

The white matter that surrounds the grey matter is made up of bundles of ascending and descending fibers known as funiculi. Although the funiculi serve diverse functions, they are often grouped according to location into the posterior, lateral, and anterior funiculi. Each of these funiculi is made up of a variety of ascending and descending tracts, but the funiculi are frequently associated with a small number of important, well-defined tracts whose fibers are carried within them.

For example, the posterior funiculi contain the posterior columns, important fibers that carry information regarding tactile (i.e. touch) sensations and proprioception to the brain. At the level of the medulla, these fibers form the medial lemniscus, another tract that continues to carry the information on to the thalamus and somatosensory cortex. The whole pathway (from the spinal cord to the somatosensory cortex) is often referred to as the posterior (or dorsal) columns-medial lemniscus system.

The lateral funiculi contain important fibers that carry pain and temperature sensations to the brain. These fibers (some of which enter the lateral funiculi from the substantia gelatinosa) are part of what is called the anterolateral system, which consists of several pathways that carry information regarding painful sensations to various sites in the brain and brainstem. The tracts that are part of the anterolateral system include: the spinothalamic tract, which is important for creating awareness of and identifying the location of painful stimuli; the spinomesencephalic tract, which is involved in inhibiting painful sensations; and the spinoreticular tract, which is involved in directing attention to painful stimuli.

The lateral funiculi also contain an important motor pathway: the corticospinal tract. The corticospinal tract fibers originate in the cerebral cortex (e.g. the precentral gyrus or primary motor cortex) and synapse on alpha motor neurons in the anterior horn. These alpha motor neurons then send their axons to skeletal muscle to initiate movement, and therefore the corticospinal tract plays an important role in voluntary movement.

The anterior funiculi aren't defined by a specific tract that travels through them. They contain a variety of ascending and descending tracts, including some fibers from the corticospinal tract.

Thus, the spinal cord acts as the intermediary between the brain and the body, and all sensory and motor signals pass through it before reaching their final destination. This is why a healthy spinal cord is crucial and damage to the spinal cord can be debilitating or life threatening.

To learn more about the spinal cord, check out this set of 2-Minute Neuroscience videos:

2-Minute Neuroscience: Exterior of the Spinal Cord

2-Minute Neuroscience: Spinal Cord Cross-section

New approaches to epilepsy treatment: optogenetics and DREADDs

Epilepsy refers to a group of disorders that are characterized by recurrent seizures. It is a relatively common neurological condition, and is considered the most common serious (implying that there is a risk of mortality) brain disorder, affecting around 2.2 million Americans.

The seizures associated with epilepsy are not homogeneous; they can have a drastically different presentation depending on the patient, the part of the brain the seizure originates in, and how much of the brain the seizure affects. For example, seizures can involve clonic activity (i.e. jerking movements), tonic activity (i.e. rigid contraction of muscles), atonia (i.e. loss of muscle activity), or any combination of motor movements and/or loss of motor activity. On the other hand, they can simply be associated with a brief and subtle loss of consciousness, as in the case of an absence seizure.

One attribute that all seizures have in common, however, is excessive neural activity. Seizures are generally characterized by an increased rate of firing in a population of neurons and/or synchronous firing (i.e. neurons that are normally activated at disparate times are all firing together, leading to large spikes in neural activity) by a neuronal population. Because seizures involve an excessive level of neural activity, ictogenesis (i.e. the generation of seizures) has commonly been considered to involve either the direct excitation of neurons or the failure of a mechanism that inhibits the excitation of neurons.

Pharmacological treatments for epilepsy have been designed from this perspective, and have generally involved drugs that either decrease neural activation or increase neural inhibition. For example, drugs like carbamazepine and lamotrigine treat epilepsy by reducing activity at sodium channels in neurons, which makes neurons less likely to fire action potentials and leads to less overall neuronal activity. Other drugs, like phenobarbital and lorazepam, actively promote neural inhibition by increasing the stimulation of gamma-aminobutyric acid (GABA) receptors. GABA receptor activation typically makes neurons less likely to fire, which also reduces overall neural activity.

Pharmacological treatments for epilepsy, however, leave much to be desired. The side effects associated with them range from minor (e.g. fatigue) to serious (e.g. liver failure), and about 30% of epilepsy cases don't even respond to current pharmacological treatments. Surgical options (e.g. removing an area of brain tissue where seizures originate) can be considered in severe cases. Clearly, however, this is an irreversible treatment, and also one that lacks specificity, which means that some healthy (but potentially important) brain tissue is likely to be removed along with the areas from which seizures emerge. Although surgical procedures can allow about 60-70% of patients to be seizure free within a year after the procedure, after 10 years more than half of patients begin to experience seizures again.

One of the major limitations of current approaches to treating epilepsy is that they lack specificity. For, even if seizure activity can be traced back to excess neural excitation or deficiencies in neural inhibition, it is clear that these problems are not occurring all of the time because--except in rare cases--seizures are intermittent and represent only a small percentage of overall brain activity. Drugs that increase inhibition or reduce excitation, however, exert these effects continually (as does surgery, of course). Thus, current epilepsy treatment is a rather crude approach that involves exerting a constant effect on the brain in the hope of preventing a relatively rare event.

Because of this, efforts at designing new treatments for epilepsy have focused on more selective techniques. Although these approaches--which hypothetically involve targeting only neurons involved in ictogenesis--are still a long way from being used in humans, there is some promise associated with them. One method, optogenetics, targets seizure activity by incorporating light-sensitive proteins into neurons and then controlling their activity with the application of light. Another approach, designer receptors exclusively activated by designer drugs (DREADDs), focuses on ictogenesis by incorporating genetically engineered receptors that respond only to a specific ligand into neurons and then controlling neuronal activity through the administration of that ligand.

Optogenetics for the treatment of epilepsy

Optogenetics is a field that combines insights from optics and genetics to manipulate the activity of neurons. It generally involves the use of gene therapy techniques to promote the expression of genes that encode light-sensitive proteins called opsins. In most cases, genes for opsins are carried into an organism after being incorporated into a virus's DNA (the virus in this case is known as a viral vector), or an animal is genetically engineered to express opsin genes in certain neurons from birth. Opsin expression can be targeted to specific cell types, and the proteins can be used to create receptors or ion channels that are sensitive to light. When light is delivered to these neurons--either via an optical fiber inserted into the brain or with newer technologies that allow wireless external delivery of light--the opsins are activated. This allows exposure to light to act like an on-off switch for the neurons in which opsins are expressed, making them suitable for experimental or therapeutic manipulation.

One way optogenetics can be used as a treatment for epilepsy is by promoting the expression of a light-sensitive ion channel that, when activated, allows a flow of negatively charged chloride ions into the cell. This hyperpolarizes the neuron and makes it less likely to fire an action potential. Alternatively, opsin ion channels can be expressed on GABA neurons that, when activated, cause increased GABA activity and thus promote general neuronal inhibition. Both of these approaches lead to reduced activity of neuronal populations, potentially decreasing the excessive activity associated with seizures.

When linked to some method of seizure detection, optogenetics can be used to inhibit seizure activity at the first indication of its occurrence. This has already been achieved in experimental animals. For example, Krook-Magnuson et al. (2012) promoted either the expression of inhibitory channel opsins or opsins that activate GABA neurons in different groups of mice, then monitored seizure activity using electroencephalography (EEG) after the injection of a substance that promotes seizures. When seizure activity was detected on the EEG, it automatically triggered the application of light to activate the opsins. In both groups (inhibitory channel opsins and excitatory opsins on GABA neurons), light application rapidly stopped seizures.
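The closed-loop logic in these experiments--monitor a signal, detect abnormal activity, trigger the intervention--can be sketched in miniature. The toy Python sketch below flags a burst of high-amplitude activity in a simulated trace using a moving root-mean-square threshold; the signal values, window size, and threshold are invented for illustration and bear no resemblance to real seizure-detection algorithms, which are far more sophisticated.

```python
# Toy closed-loop detector: scan a signal with a sliding window and
# report the first sample at which amplitude crosses a threshold
# (the point at which light delivery would be triggered).

def rms(window):
    """Root-mean-square amplitude of a list of samples."""
    return (sum(s * s for s in window) / len(window)) ** 0.5

def closed_loop(samples, window_size=4, threshold=2.0):
    """Return the index of the sample at which the intervention would
    be triggered, or None if activity never crosses the threshold."""
    for i in range(window_size, len(samples) + 1):
        if rms(samples[i - window_size:i]) > threshold:
            return i - 1  # trigger at this sample
    return None

# Quiet baseline followed by a burst of high-amplitude "seizure" activity.
eeg = [0.1, -0.2, 0.1, 0.0, -0.1, 3.5, -4.0, 3.8, -3.6]
trigger = closed_loop(eeg)  # index 6: partway into the burst
```

In the real systems described above, the detection step runs on EEG hardware and the "trigger" step drives a light source, but the control structure--continuous monitoring with intervention only at detection--is the same.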

Thus, when combined with seizure activity monitoring, optogenetics provides a way to selectively control neural excitation, cutting seizures off as soon as they begin. Optogenetics is still a relatively new field, however, and the work in this area has not yet translated into clinical approaches with humans. There are some significant hurdles to overcome before that can happen. One involves the need for a device that can non-invasively and effectively deliver light; another concerns the need for a non-stationary device that can monitor seizure activity. Advances in wireless light delivery, however, have already been made, and an implantable device to monitor seizure activity in humans was recently tested for the first time. Therefore, while this technology is not ready to be applied to epilepsy treatment in humans yet, its use is feasible in the not-so-distant future.

DREADDs for the treatment of epilepsy

Designer receptors exclusively activated by designer drugs, or DREADDs, are another approach that addresses the desire for specificity in epilepsy treatment. The use of DREADDs involves mutating genes that encode neurotransmitter receptors, then forcing the expression of those mutated genes in an experimental animal. Receptors can be engineered so they no longer respond to their natural ligand, but instead only respond to a synthetic, exogenously administered drug. DREADD expression can be targeted to specific cell populations and, like the optogenetic methods discussed above, the receptors can be used to activate inhibitory neurons or inhibit excitatory neurons.

For example, Katzel et al. (2014) modified an inhibitory muscarinic acetylcholine receptor to no longer respond to acetylcholine but instead only to a synthetic ligand called clozapine-N-oxide (CNO); they then promoted the expression of this receptor in the motor cortices of rats. They gave the rats a seizure-causing substance, then administered CNO, and found that CNO administration significantly reduced seizure activity.

Therefore, it seems that DREADDs also have potential for the targeted treatment of epilepsy. Because activation of DREADDs only requires taking a pill, it is considered less invasive than current optogenetic approaches. However, optogenetics possesses greater temporal specificity: light delivery can begin immediately upon the onset of seizure activity and be terminated just as quickly. Synthetic ligands for DREADDs, on the other hand, must be administered in advance of ictogenesis to ensure the drug is available to inhibit seizure activity when it begins, and will remain active in a patient's system until the drug is metabolized by the body.

Just as with optogenetics, though, there are some hurdles that must be overcome for DREADD use to translate into the clinical treatment of epilepsy. For example, individuals tend to vary considerably in how quickly they metabolize drugs. Thus, there might be some variation in the time span of protection offered by administration of a DREADD ligand, which in the case of potentially severe seizures could be dangerous. Also, although the ligands used for DREADD activation are chosen based on their selectivity for the designer receptor, a metabolite of CNO is clozapine, a commonly used antipsychotic drug that also activates other receptors. In rodents, this did not translate into side effects, but the potential for metabolites of synthetic ligands to be biologically active must be considered when attempting to apply the technology to human populations.

Optogenetics and DREADDs both represent intriguing approaches to treating epilepsy in the future. The intrigue stems primarily from their ability to target only certain cells, which is likely to reduce the occurrence of side effects. Regardless, even if these technologies can't be used to treat humans for a long time--or ever--they still have a place in epilepsy research, for they also allow more control over seizures in experimental animals, which makes a more thorough dissection of the seizure process possible. At the very least, this should provide more insight into a dangerous, yet relatively common, neurological disorder.

Krook-Magnuson, E., & Soltesz, I. (2015). Beyond the hammer and the scalpel: selective circuit control for the epilepsies. Nature Neuroscience, 18(3), 331-338. DOI: 10.1038/nn.3943

2-Minute Neuroscience: Directional Terms in Neuroscience

In this video, I cover directional terms in neuroscience. I discuss terms that are consistent throughout the nervous system: superior, inferior, anterior, posterior, medial and lateral. I also cover terms that change their meaning slightly depending on whether we are looking at the brain or spinal cord: dorsal, ventral, rostral, and caudal. Finally, I discuss three types of sections the brain is commonly examined in: sagittal, horizontal/transverse, and coronal/frontal.

Know your brain: Pineal gland

Where is the pineal gland?

Pineal gland (in red). Image courtesy of Life Science Databases.

The pineal gland is considered part of the epithalamus, which is one of the main structures that make up the diencephalon. The pineal gland was so named because it has a pine cone-like appearance. Unlike many structures in the brain, the pineal gland is unpaired: whereas structures like the hippocampus or amygdala exist as a symmetrical pair, with one copy in each hemisphere of the brain, there is only one pineal gland, and it sits right on the midline of the brain.

What is the pineal gland and what does it do?

The solitary nature and unknown function of the pineal gland contributed to the French philosopher René Descartes calling it the "seat of the soul" and suggesting it was the place where the immaterial soul communicated with the physical body. Descartes' ideas about the pineal gland were never widely accepted by his contemporaries, however, and today the function most frequently associated with the pineal gland is the secretion of the hormone melatonin, which is involved in the regulation of circadian rhythms.

There are no neurons that leave the pineal gland to carry signals to other areas of the brain. Instead, the main output of the pineal gland--and the way it communicates with the rest of the nervous system--is melatonin. The pineal gland is made up primarily of secretory cells called pinealocytes, which secrete melatonin at varying rates throughout our 24-hour cycle. The highest rates of melatonin secretion occur in the middle of the night; they begin to decrease as it gets closer to dawn. This schedule of melatonin release is maintained based on information about the amount of light in the environment that the pineal gland receives from the retina. The retina sends this information to a nucleus in the hypothalamus called the suprachiasmatic nucleus (SCN), and from there it takes a convoluted path to the pineal gland.

In addition to sending information about ambient lighting to the pineal gland, the SCN also controls circadian rhythms. The SCN has receptors for melatonin, and it uses the melatonin signal to obtain information about the time of day. Because melatonin levels are highest during the hours of darkness, the SCN can use melatonin activity as a sign that our circadian rhythm should be in its nocturnal stage. In this way, melatonin secretion can act as an important indicator when one's circadian rhythm is not in sync with the environment (e.g. if high levels of secretion are occurring but the person is still wide awake). This happens when, for example, someone has to adapt to a new 24-hour cycle after flying across several time zones. Exogenously administered melatonin, in fact, has been explored as a way of speeding up the process of adapting to a new sleep-wake cycle, with some success.

Just as melatonin secretion can provide information about the time of day, the nightly duration of melatonin secretion can provide information about the season of the year. Because longer periods of darkness occur in winter, the duration of melatonin secretion at night in the winter is slightly longer than it is in the summer. This is used as a signal in animals that are considered photoperiodic, meaning they experience biological and behavioral changes in response to the changing seasons. For example, many rodents suppress sexual activity during the winter months, and it has been shown that removal of the pineal gland in rodents prevents this suppression from occurring. This suggests that melatonin secretion from the pineal gland serves as a sort of biological calendar in rodents, in the process helping to regulate their seasonal behavior. It is not clear that this function for melatonin has a great deal of relevance for humans, who are not considered photoperiodic. However, because depressive symptoms in seasonal affective disorder begin during winter, abnormal melatonin secretion has been suspected of playing a role in the disorder, suggesting it is within the realm of possibility that seasonal changes in melatonin secretion affect human behavior as well.

Due to its close association with nighttime and circadian rhythms, melatonin has been investigated for a potential role in promoting sleep. Some have hypothesized that melatonin secretion may facilitate sleep by inhibiting activity in the SCN that promotes wakefulness. However, the true relationship between melatonin and sleep is unclear. In nocturnal animals, melatonin levels are still highest at night, suggesting a role for melatonin in circadian rhythms that does not necessarily involve sleep regulation. Many studies have investigated the effects of administering melatonin on sleep, and although there are some indications it may be effective in treating mild sleep disturbances, the results have been mixed (for example, see Ferracioli-Oda et al., 2013 and Buscemi et al., 2005).

Proper levels of melatonin secretion are important for human health, and the hormone is involved in a wide range of processes not discussed here. Perhaps because the pineal gland is highly specialized, focusing only on melatonin secretion, its importance is sometimes overlooked. The significance of melatonin in maintaining circadian rhythms, however, and the pineal gland's role in producing it, suggest that the pineal gland is an essential structure for the health of the central nervous system.

Sapède, D., & Cau, E. (2013). The pineal gland from development to function. Current Topics in Developmental Biology. DOI: 10.1016/B978-0-12-416021-7.00005-5

2-Minute Neuroscience: Spinal Cord Cross-section

In this video, I cover the spinal cord in cross-section. I discuss how the spinal cord is composed of grey and white matter. The grey matter is divided into three regions: the posterior horn, anterior horn, and intermediate grey matter. The white matter is divided into the posterior, anterior, and lateral funiculi. I describe all of these subdivisions and the functions they are primarily involved in.