The neurobiological underpinnings of suicidal behavior

When you consider that so much of our energy and such a large portion of our behavioral repertoire is devoted to ways of ensuring our survival, suicide appears to be perhaps the most inexplicable human behavior. What would make this human machine--which most of the time seems to be resolutely programmed to scratch, claw, and fight to endure through even the most dire situations--so easily decide to give it all up, even when the circumstances may not objectively seem all that desperate? Suicide is a difficult behavior to justify rationally, and yet it is shockingly common. More people throughout the world end their lives by suicide each year than are killed by homicide and wars combined.

The influences thought to contribute to suicidal behavior are numerous, intertwined, and difficult to untangle. Clearly, the factors that lead to an act of suicide will vary considerably from one individual to the next; nevertheless, some variables are thought to generally increase the risk of suicidal behavior. A number of studies have, for example, demonstrated that genetic factors are associated with a predisposition to suicidal behavior. Early-life adversity--such as sexual abuse, physical abuse, or severe neglect--has also been strongly linked to suicide. However, even among groups with higher suicide risk there is a great deal of variability, which adds to the complexity of the issue. For example, personality traits like impulsiveness and aggression have been associated with an increased risk of suicide--but this relationship is seen primarily in younger people. It is not as apparent in older individuals who display suicidal behavior, who are often characterized by higher levels of harm avoidance rather than risk-taking.

While there are a number of predisposing factors involving personal characteristics or previous life events that make suicidal ideation and behavior more likely, there are also factors that immediately precede a suicide attempt which are thought to be directly linked to the transition from thinking about suicide to acting on those thoughts. Of course, some of those factors are likely to involve changes in neurobiology and neurochemistry that cause suicide--which may have previously just been an occasional thought--to become the focus of a present-moment plan that is sometimes borne out with great urgency and determination. And, while it is important to be able to identify influences that predispose individuals to suicidal thinking in general, an understanding of the neurobiological factors that precipitate a suicide attempt might open the door for treatments designed to protect an individual from acting on (or experiencing) sudden impulses to complete a suicide.

The distal predisposing factors for suicidal behavior are difficult to identify because of the myriad influences involved; the proximal neurobiological influences are hard to pinpoint both because of their complexity and because a suicidal crisis is often short-lived and difficult to study. The most direct way to investigate changes in the suicidal brain is to look at the brains of suicide completers (i.e. those who are now deceased due to suicide). One reason for focusing on suicide completers is that we can expect some neurochemical--and possibly psychological--differences between suicide completers and those who attempted suicide but are still alive. Working with postmortem brains, however, has its own limitations: obtaining accurate background information is challenging without the ability to interview the patient; there may be effects on the brain (e.g. from the process of death and its associated trauma, or from drugs/medications taken before death) that make it hard to isolate the factors involved in provoking one towards suicide; and being able to examine the brain at only one point in time makes causal interpretations difficult.

Regardless, investigations into irregularities in the brains of those who exhibit suicidal behavior (both attempters and completers) have identified several possible contributing factors that may influence the decision to act on suicidal thoughts. Many of these factors are also implicated in depressed states, as most suicidal individuals display some characteristics of a depressed mood even if they don't meet the criteria for a diagnosis of major depressive disorder. (This, of course, adds another layer of complexity to interpretation as it is difficult to determine if suicide-related factors are simply characteristics of a depressed mood and not solely related to suicidal actions.) The role of each of these factors in suicidal behavior is still hypothetical, and the relative contribution of each is unknown. However, it is thought that some--or all--of them may be implicated in bringing about the brain state associated with suicidal actions.

Alterations in neurotransmitter systems

Abnormalities in the serotonin system have long been linked to depressive behavior, despite more recent doubts about the central role of serotonin in the etiology of depression. Similarly, there appear to be some anomalies in the serotonin system in the brains of suicidal individuals. In an early study on alterations in the serotonin system in depressed patients, Asberg et al. found that patients with low levels of 5-hydroxyindoleacetic acid, the primary metabolite of serotonin (and thus often used as a proxy measure of serotonin levels), were significantly more likely to attempt suicide. Additionally, those who survive a suicide attempt display a diminished response to the administration of fenfluramine, a serotonin agonist that in a typical brain prompts increased serotonin release. A number of neuroimaging studies have also detected reduced serotonin receptor availability in the brains of suicidal patients. Together, this evidence suggests that abnormalities in the serotonin system play some role in suicidal behavior, although the specifics of that role remain unknown.

As we have learned from investigations of depression, however, it is important to avoid focusing too much on one-neurotransmitter explanations of behavior. Accordingly, a number of other neurotransmitter abnormalities have been detected in suicidal patients as well. For example, gene expression analyses in postmortem brains of individuals who died by suicide have identified altered expression of genes encoding GABA and glutamate receptors in various areas of the brain. Although the consequences of these variations in gene expression are unknown, abnormalities in GABA and glutamate signaling have both also been hypothesized to play a role in depression.

Abnormalities in the stress response

Irregularities in the stress response have long been implicated in depression, and thus it may not be surprising that stress system anomalies have been observed in patients exhibiting suicidal behavior as well. The hypothalamic-pituitary-adrenal (HPA) axis is a network that connects the hypothalamus, pituitary gland, and adrenal glands; it is activated during stressful experiences. When the HPA axis is stimulated, corticotropin-releasing hormone is secreted from the hypothalamus, which causes the pituitary gland to secrete adrenocorticotropic hormone, which then prompts the adrenal glands to release the stress hormone cortisol. In depressed patients, cortisol levels are generally higher than normal, suggesting the HPA axis is hyperactive; this may be indicative of the patient being in a state of chronic stress.

In suicidal individuals, the HPA axis seems to be dysregulated as well. For example, in one study the HPA activity of a group of psychiatric inpatients was tested using what is known as the dexamethasone suppression test (DST). In this procedure, patients are injected with dexamethasone, a synthetic hormone that should act to suppress cortisol secretion if HPA axis activity is normal; if it does not do so, however, it suggests the HPA axis is hyperactive. Out of 78 patients, 32 displayed abnormal HPA activity on the DST. Over the next 15 years, 26.8% of the individuals with abnormal HPA activity committed suicide, while only 2.9% of the individuals with normal DST results killed themselves.
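
Although the study reports only these percentages, a quick back-of-the-envelope calculation makes the size of the gap concrete. This is a minimal sketch in Python; the relative-risk framing (and the variable names) are mine, not the study's:

```python
# Outcomes reported in the DST study described above.
total_patients = 78
abnormal_dst = 32                            # non-suppressors (abnormal HPA activity)
normal_dst = total_patients - abnormal_dst   # 46 patients with normal results

suicide_pct_abnormal = 26.8                  # % dying by suicide over 15 years
suicide_pct_normal = 2.9

# Ratio of the two rates, a rough relative risk.
print(f"Relative risk: {suicide_pct_abnormal / suicide_pct_normal:.1f}x")
# -> Relative risk: 9.2x
```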

Another system involved in stress responses that may display irregularities in suicidal individuals is the polyamine stress response (PSR). Polyamines are molecules that are involved in a number of essential cellular functions; their potential role in psychiatric conditions has only been recognized in the past few decades. It is believed that the activation of the PSR and the associated increases in levels of polyamines in the brain may be beneficial, serving a protective role in reducing the impact of a stressor on the brain. And, there appear to be abnormalities in the PSR in the brains of those who have committed suicide. Because the PSR and its role in psychiatric conditions is still just beginning to be understood, however, it is unclear what these alterations in the PSR might mean; future investigations will attempt to elucidate the connection between PSR abnormalities and suicidal behavior.

One of the consequences of stress is the initiation of an inflammatory response. This is thought to be an adaptive reaction to stress, as the stress system likely evolved to deal primarily with physical trauma, and the body would have benefited from reflexive stimulation of the immune system in cases where physical damage had been sustained. This immune system activation would prepare the body to fight off infection that could occur due to potential tissue damage (the inflammatory response is the first step in preventing infection). Thus, it may not be surprising that suicidal patients often display markers of inflammation in the brain. This inflammatory reaction may on its own promote brain changes that increase suicide risk, or it may just be a corollary of the activation of the stress system.

Glial cell abnormalities

While we have a tendency to focus on irregularities in neurons and neuronal communication when investigating the causes of behavior, it is becoming more widely recognized that glial cells also play an essential role in healthy brain function. Accordingly, anomalies in glial cells have been noted in the brains of suicidal patients. Several studies, for example, have identified deficits in the structure or function of astrocytes in the suicidal brain. One study found that cortical astrocytes in postmortem brains of suicide patients displayed altered morphology; their enlarged cell bodies and other morphological abnormalities were consistent with the hypothesis that they had been affected by local inflammation. Analyses of gene expression in the postmortem brains of suicide victims also found that genes associated almost exclusively with astrocytes were differentially expressed. While the implications of these studies are not yet fully clear, abnormalities in glial cells represent another area of investigation in our attempts to understand what is happening in the suicidal brain.

Future directions

Irregularities in neurotransmitter systems, a hyperactive stress response, and anomalous glial cell morphology and density all may be factors that contribute to the suicidal phenotype. But it is unclear at this point if any one of these variables is the factor that determines the transition from suicidal ideation to suicidal behavior. It is more likely that they all may contribute to large-scale changes throughout the brain that lead to suicidal activity. Of course, all of the factors mentioned above may simply be associated with symptoms (like depressed mood) commonly seen in suicidal individuals, and the true culprit for provoking suicidal actions could be a different mechanism altogether, of which we are still unaware.

As mentioned above, this area of research is fraught with difficulties as the brains of suicide completers can only be studied postmortem. One research approach that attempts to circumvent this obstacle while still providing relevant information on the suicidal brain involves the study of pharmacological agents that reduce the risk of suicide. For, if a drug reduces the risk of suicide then perhaps it is reversing or diminishing the impact of neurobiological processes that trigger the event. One example of such a drug is lithium. Lithium is commonly used to treat bipolar disorder but is also recognized to reduce the risk of suicide in individuals who have a mood disorder. Gaining a better understanding of the mechanism of action that underlies this effect might allow for a better understanding of the neurobiology of suicidal behavior as well. Additionally, ketamine is a substance that appears to have fast-acting (within two hours after administration) antidepressant action and also may cause a rapid reduction (as soon as 40 minutes after administration) in suicidal thinking. Understanding how a drug can so quickly cause a shift away from suicidal thoughts may also be able to shed some light on processes that underlie suicidal actions.

Whatever the neurobiological underpinnings of suicidal behavior may be, the search for them should have some urgency about it. Suicide was the 10th leading cause of death in 2013, and yet a treatment for suicidal behavior specifically does not seem to be pursued with the same fervor as treatments for other leading causes of death, like Parkinson's disease, that actually don't lead to as many deaths per year as suicide. Perhaps many consider suicide a fact of life, something that will always afflict a subset of the population, or perhaps the focus is primarily directed toward treating depression with the assumption that better management of depression will lead to a reduction in suicide attempts. However, if we can come to understand what really happens in the brain of someone immediately before they make the fatal decision to end their life, treatment to specifically reduce the risk of suicide--regardless of the underlying disorder--is not out of the realm of possibility. Thus, it seems like a goal worth striving for.

Turecki, G. (2014). The molecular bases of the suicidal brain. Nature Reviews Neuroscience, 15(12), 802-816. DOI: 10.1038/nrn3839

2-Minute Neuroscience: The Ventricles

In this video, I cover the ventricles. I discuss the function of the ventricles, which involves production and distribution of cerebrospinal fluid; I also briefly explain the functions of cerebrospinal fluid. I describe the structure of the ventricles, including descriptions of the lateral, third, and fourth ventricles, as well as the means by which the ventricles are connected to one another: the interventricular foramen and cerebral aqueduct. Finally, I mention hydrocephalus, a condition that occurs when cerebrospinal fluid levels in the ventricles get too high.

Know your brain: Spinal cord

Where is the spinal cord?

Spinal cord (in red). Image courtesy of William Crochot.

The spinal cord runs from the medulla oblongata of the brainstem down to the first or second lumbar vertebra of the vertebral column (aka the spine). The spinal cord is shorter than the vertebral column, and overall it is a surprisingly small structure: only about 16.5-17.5 inches long on average, with a diameter of less than half an inch at its widest point.

What is the spinal cord and what does it do?

The spinal cord is one of the two major components of the central nervous system (the other being the brain); its proper functioning is absolutely essential to a healthy nervous system. The spinal cord contains motor neurons that innervate skeletal muscle and allow for movement as well as motor tracts that carry directives for motor movement down from the brain. The spinal cord also receives all of the sensory information from the periphery of our bodies, and contains pathways by which that sensory information is passed along to the brain.

Motor neurons leave the cord in collections of nerves called ventral rootlets, which then coalesce to form a ventral root. Sensory information is carried by sensory neurons in dorsal roots, which enter the cord in small bundles called dorsal rootlets. The cell bodies for these sensory neurons are clustered together in a structure called the dorsal root ganglion, which is found alongside the spinal cord. The ventral root and dorsal root come together just beyond the dorsal root ganglion (moving away from the cord) to form a spinal nerve.

Spinal nerves, colored by the spinal cord segment they emerge from. Red = cervical, blue = thoracic, pink = lumbar, green = sacral.

Spinal nerves travel to the periphery of the body; there are 31 pairs of spinal nerves in total. Each area of the spinal cord from which a spinal nerve leaves is considered a segment and there are 31 segments in the spinal cord: 8 cervical, 12 thoracic, 5 lumbar, 5 sacral, and 1 coccygeal.

The spinal cord terminates in a cone-shaped structure called the conus medullaris, which is usually found at around the first or second lumbar vertebra (L1-L2). However, the spinal cord (like the brain) is surrounded by protective membranes known as the meninges, and the meningeal layers known as the dura mater and arachnoid mater continue for several more segments (to about the second sacral vertebra) beyond the end of the cord itself. Because this extension of the meningeal covering of the cord--sometimes referred to as the dural sheath--continues past the end of the cord, it creates a cerebrospinal fluid-filled cavity known as the lumbar cistern where there is no cord present. Additionally, although the conus medullaris is found at around L2, there are still several pairs of spinal nerves that must travel to the lower half of the body from the final segments of the cord. These nerves travel through the lumbar cistern; the straggly collection of fibers here is referred to as the cauda equina because it resembles a horse's tail. Cerebrospinal fluid is often taken from the lumbar cistern if it needs to be sampled for testing (e.g. for meningitis). This procedure is known as a lumbar puncture or spinal tap; it is done at the lumbar cistern because there is little risk of damaging the spinal cord by inserting a needle there (since the cord is not present at that level of the vertebral canal).

The spinal cord is attached to the end of the dural sheath by a thin extension of the pia mater known as the filum terminale. The filum terminale also extends from the end of the dural sheath to attach the spinal cord to the tailbone. In both cases, the filum terminale helps to anchor the cord in place.

Spinal cord in cross-section.

When you look at the spinal cord in cross-section (at any level) you will see what some describe as an H-shaped or butterfly-shaped area of grey matter surrounded by white matter. The grey matter consists of cell bodies of motor and sensory neurons, and is divided into three regions. The area closest to the back of the spinal cord is called the posterior horn. It contains the cell bodies of interneurons, whose processes don't leave the spinal cord, as well as neurons whose processes enter ascending tracts to carry sensory information up the cord. The substantia gelatinosa is an area of the posterior horn that is specialized to deal primarily with fibers carrying pain and temperature information.

The area of the grey matter closest to the front of the spinal cord is called the anterior horn. It contains the cell bodies of alpha motor neurons (aka lower motor neurons). These neurons leave the spinal cord in the ventral roots and project to skeletal muscle. They are responsible for all voluntary and involuntary movements.

The section of grey matter between the anterior and posterior horns is referred to as the intermediate grey matter. There is not a clear division between the anterior and posterior horns and the intermediate grey matter, so the intermediate grey matter contains some neurons that have characteristics similar to those found in each of the horns. It also contains a variety of interneurons involved in sensory and motor transmission. But the intermediate grey matter has unique functions as well, for it contains the cell bodies of autonomic neurons that are responsible for mediating involuntary processes in the body. These neurons are involved in internal organ functions that are not generally under conscious control, such as heart rate, respiration, digestion, etc.

The white matter that surrounds the grey matter is made up of bundles of ascending and descending fibers known as funiculi. Although the funiculi serve diverse functions, they are often grouped according to location into the posterior, lateral, and anterior funiculi. Each of these funiculi is made up of a variety of ascending and descending tracts, but the funiculi are frequently associated with a small number of important, well-defined tracts whose fibers are carried within them.

For example, the posterior funiculi contain the posterior columns, important fibers that carry information regarding tactile (i.e. touch) sensations and proprioception to the brain. At the level of the medulla, these fibers form the medial lemniscus, another tract that continues to carry the information on to the thalamus and somatosensory cortex. The whole pathway (from the spinal cord to the somatosensory cortex) is often referred to as the posterior (or dorsal) columns-medial lemniscus system.

The lateral funiculi contain important fibers that carry pain and temperature sensations to the brain. These fibers (some of which enter the lateral funiculi from the substantia gelatinosa) are part of what is called the anterolateral system, which consists of several pathways that carry information regarding painful sensations to various sites in the brain and brainstem. The tracts that are part of the anterolateral system include: the spinothalamic tract, which is important for creating awareness of and identifying the location of painful stimuli; the spinomesencephalic tract, which is involved in inhibiting painful sensations; and the spinoreticular tract, which is involved in directing attention to painful stimuli.

The lateral funiculi also contain an important motor pathway: the corticospinal tract. Corticospinal tract fibers originate in the cerebral cortex (e.g. the precentral gyrus or primary motor cortex) and synapse on alpha motor neurons in the anterior horn. The axons of these alpha motor neurons then travel to skeletal muscle to initiate movement, and therefore the corticospinal tract plays an important role in voluntary movement.

The anterior funiculi aren't defined by a specific tract that travels through them. They contain a variety of ascending and descending tracts, including some fibers from the corticospinal tract.

Thus, the spinal cord acts as the intermediary between the brain and the body, and all sensory and motor signals pass through it before reaching their final destination. This is why a healthy spinal cord is crucial and damage to the spinal cord can be debilitating or life threatening.

To learn more about the spinal cord, check out this set of 2-Minute Neuroscience videos:

2-Minute Neuroscience: Exterior of the Spinal Cord

2-Minute Neuroscience: Spinal Cord Cross-section


New approaches to epilepsy treatment: optogenetics and DREADDs

Epilepsy refers to a group of disorders that are characterized by recurrent seizures. It is a relatively common neurological condition, and is considered the most common serious brain disorder (serious implying that there is a risk of mortality), affecting around 2.2 million Americans.

The seizures associated with epilepsy are not homogeneous; they can present drastically differently depending on the patient, the part of the brain the seizure originates in, and how much of the brain the seizure affects. For example, seizures can involve clonic activity (i.e. jerking movements), tonic activity (i.e. rigid contraction of muscles), atonia (i.e. loss of muscle activity), or any combination of motor movements and/or loss of motor activity. On the other hand, they can simply be associated with a brief and subtle loss of consciousness, as in the case of an absence seizure.

One attribute that all seizures have in common, however, is excessive neural activity. Seizures are generally characterized by an increased rate of firing in a population of neurons and/or synchronous firing (i.e. neurons that are normally activated at disparate times are all firing together, leading to large spikes in neural activity) by a neuronal population. Because seizures involve an excessive level of neural activity, ictogenesis (i.e. the generation of seizures) has commonly been considered to involve either the direct excitation of neurons or the failure of a mechanism that inhibits the excitation of neurons.

Pharmacological treatments for epilepsy have been designed from this perspective, and have generally involved drugs that either decrease neural activation or increase neural inhibition. For example, drugs like carbamazepine and lamotrigine treat epilepsy by reducing activity at sodium channels in neurons, which makes neurons less likely to fire action potentials and leads to less overall neuronal activity. Other drugs, like phenobarbital and lorazepam, actively promote neural inhibition by increasing the stimulation of gamma-aminobutyric acid (GABA) receptors. GABA receptor activation typically makes neurons less likely to fire, which also reduces overall neural activity.

Pharmacological treatments for epilepsy, however, leave much to be desired. The side effects associated with them range from minor (e.g. fatigue) to serious (e.g. liver failure), and about 30% of epilepsy cases don't respond to current pharmacological treatments at all. Surgical options (e.g. removing an area of brain tissue where seizures originate) can be considered in severe cases. Clearly, however, surgery is an irreversible treatment and also one that lacks specificity, which means that some healthy (and potentially important) brain tissue is likely to be removed along with the areas from which seizures emerge. Although surgical procedures can leave about 60-70% of patients seizure free within a year after the procedure, after 10 years more than half of patients begin to experience seizures again.

One of the major limitations to the current approaches to treating epilepsy is that they lack specificity. For, even if seizure activity can be traced back to excess neural excitation or deficiencies in neural inhibition, it is clear that these problems are not occurring all of the time because--except in rare cases--seizures are intermittent and represent only a small percentage of overall brain activity. Drugs that increase inhibition or reduce excitation, however, are having these effects continually (as is surgery, of course). Thus, the treatment of epilepsy is a rather crude approach that involves exerting a constant effect on the brain in the hopes of preventing a relatively rare event.

Because of this, efforts at designing new treatments for epilepsy have focused on more selective techniques. Although these approaches--which hypothetically involve targeting only neurons involved in ictogenesis--are still a long way from being used in humans, they show some promise. One method, optogenetics, targets seizure activity by incorporating light-sensitive proteins into neurons and then controlling their activity with the application of light. Another approach, designer receptors exclusively activated by designer drugs (DREADDs), targets ictogenesis by incorporating genetically engineered receptors that only respond to a specific ligand into neurons and then controlling neuronal activity through the administration of that ligand.

Optogenetics for the treatment of epilepsy

Optogenetics is a field that combines insights from optics and genetics to manipulate the activity of neurons. It generally involves the use of gene therapy techniques to promote the expression of genes that encode for light sensitive proteins called opsins. In most cases, genes for opsins are carried into an organism after being incorporated into a virus' DNA (the virus in this case is known as a viral vector) or an animal is genetically engineered to express opsin genes in certain neurons from birth. Opsin expression can be targeted to specific cell types and the proteins can be used to create receptors or ion channels that are sensitive to light. When light is delivered to these neurons--either via an optical fiber inserted into the brain or with newer technologies that allow wireless external delivery of light--the opsins are activated. This allows exposure to light to act like an on-off switch for the neurons in which opsins are expressed, making them suitable for experimental or therapeutic manipulation.

One way optogenetics can be used as a treatment for epilepsy is by promoting the expression of a light-sensitive ion channel that, when activated, allows a flow of negatively charged chloride ions into the cell. This hyperpolarizes the neuron and makes it less likely to fire an action potential. Alternatively, opsin ion channels can be expressed on GABA neurons that, when activated, cause increased GABA activity and thus promote general neuronal inhibition. Both of these approaches lead to reduced activity of neuronal populations, potentially decreasing the excessive activity associated with seizures.
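
A toy simulation can make the hyperpolarization logic concrete. Below is a minimal leaky integrate-and-fire sketch in Python; all parameter values are arbitrary illustrative choices, not taken from any study discussed here. A steady driving current makes the model neuron spike repeatedly, while an added hyperpolarizing "opsin" current (standing in for light-gated chloride flow) silences it:

```python
def lif_spike_count(i_drive, i_opsin=0.0, t_max=1.0, dt=1e-4,
                    tau=0.02, v_rest=-0.070, v_thresh=-0.054,
                    v_reset=-0.080, r_m=1e7):
    """Count spikes of a leaky integrate-and-fire neuron over t_max seconds.

    A negative i_opsin models a hyperpolarizing (chloride-like) current
    switched on by light in an opsin-expressing cell.
    """
    v, spikes = v_rest, 0
    for _ in range(int(t_max / dt)):
        # Standard LIF update: leak toward rest plus the injected currents.
        v += (-(v - v_rest) + r_m * (i_drive + i_opsin)) / tau * dt
        if v >= v_thresh:        # threshold crossed: record a spike and reset
            spikes += 1
            v = v_reset
    return spikes

print(lif_spike_count(i_drive=2e-9))                   # driven neuron: fires repeatedly
print(lif_spike_count(i_drive=2e-9, i_opsin=-1.5e-9))  # "light on": silenced
```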

When linked to some method of seizure detection, optogenetics can be used to inhibit seizure activity at the first indication of its occurrence. This has already been achieved in experimental animals. For example, Krook-Magnuson et al. (2012) promoted either the expression of inhibitory channel opsins or opsins that activate GABA neurons in different groups of mice, then monitored seizure activity using electroencephalography (EEG) after the injection of a substance that promotes seizures. When seizure activity was detected on the EEG, it automatically triggered the application of light to activate the opsins. In both groups (inhibitory channel opsins and excitatory opsins on GABA neurons), light application rapidly stopped seizures.
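
The logic of such a closed-loop system is simple to sketch. The toy detector below flags high-power EEG windows and gates the light accordingly; real systems use carefully tuned detection algorithms, so this simple power threshold (and the simulated signal) are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def seizure_detected(eeg_window, power_threshold=4.0):
    # Toy detector: flag any window whose mean signal power far
    # exceeds the low-amplitude baseline.
    return float(np.mean(eeg_window ** 2)) > power_threshold

# Simulated EEG stream: mostly quiet noise, with one high-amplitude,
# seizure-like epoch in the middle.
windows = [rng.normal(0.0, 1.0, 256) for _ in range(5)]
windows[2] = rng.normal(0.0, 4.0, 256)

for i, w in enumerate(windows):
    light_on = seizure_detected(w)   # detection gates the light source
    print(f"window {i}: light {'ON' if light_on else 'off'}")
```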

Thus, when combined with seizure activity monitoring, optogenetics provides a way to selectively control neural excitation, cutting seizures off as soon as they begin. Optogenetics is still a relatively new field, however, and the work in this area has not yet translated into clinical approaches with humans. There are some significant hurdles to overcome before that can happen. One involves the need for a device that can non-invasively and effectively deliver light, another concerns the need for a non-stationary device that can monitor seizure activity. Advances in wireless light delivery, however, have already been made, and an implantable device to monitor seizure activity in humans was recently tested for the first time. Therefore, while this technology is not ready to be applied to epilepsy treatment in humans yet, its use is feasible in the not-so-distant future.

DREADDs for the treatment of epilepsy

Designer receptors exclusively activated by designer drugs, or DREADDs, are another approach that addresses the desire for specificity in epilepsy treatment. The use of DREADDs involves the manipulation of genes that encode for neurotransmitter receptors, then the forced expression of those mutated genes in an experimental animal. Receptors can be engineered so they no longer respond to their natural ligand, but instead only respond to a synthetic, exogenously administered drug. DREADD expression can be targeted to specific cell populations and, like the optogenetic methods discussed above, the receptors can be used to activate inhibitory neurons or inhibit excitatory neurons.

For example, Katzel et al. (2014) modified an inhibitory muscarinic acetylcholine receptor to no longer respond to acetylcholine but instead only to a synthetic ligand called clozapine-N-oxide (CNO); they then promoted the expression of this receptor in the motor cortices of rats. They administered a seizure-causing substance, then administered CNO, and found that CNO administration significantly reduced seizure activity.

Therefore, it seems that DREADDs also have potential for the targeted treatment of epilepsy. Because activation of DREADDs only requires taking a pill, it is considered less invasive than current optogenetic approaches. However, optogenetics possesses greater temporal specificity in that it can be activated immediately upon the onset of seizure activity and terminated just as quickly. Synthetic ligands for DREADDs, on the other hand, must be administered in advance of ictogenesis to ensure the drug is available to inhibit seizure activity when it begins, and will remain active in a patient's system until the drug is metabolized by the body.

Just as with optogenetics, though, there are some hurdles that must be overcome for DREADD use to translate into the clinical treatment of epilepsy. For example, individuals tend to vary considerably in how quickly they metabolize drugs. Thus, there might be some variation in the time span of protection offered by administration of a DREADD ligand, which in the case of potentially severe seizures could be dangerous. Also, although the ligands used for DREADD activation are chosen based on their selectivity for the designer receptor, a metabolite of CNO is clozapine, a commonly used antipsychotic drug that also activates other receptors. In rodents, this did not translate into side effects, but the potential for metabolites of synthetic ligands to be biologically active must be considered when attempting to apply the technology to human populations.

Optogenetics and DREADDs both represent intriguing approaches to treating epilepsy in the future. The intrigue stems primarily from their ability to only target certain cells, which is likely to reduce the occurrence of side effects. Regardless, even if these technologies aren't able to be used to treat humans for a long time--or ever--they still have a place in epilepsy research. For, the use of these tools also allows us more control over seizures in experimental animals, which makes a more thorough dissection of the seizure process possible. At the very least, this should provide more insight into a dangerous, yet relatively common, neurological disorder.

Krook-Magnuson, E., & Soltesz, I. (2015). Beyond the hammer and the scalpel: selective circuit control for the epilepsies. Nature Neuroscience, 18(3), 331-338. DOI: 10.1038/nn.3943

2-Minute Neuroscience: Directional Terms in Neuroscience

In this video, I cover directional terms in neuroscience. I discuss terms that are consistent throughout the nervous system: superior, inferior, anterior, posterior, medial and lateral. I also cover terms that change their meaning slightly depending on whether we are looking at the brain or spinal cord: dorsal, ventral, rostral, and caudal. Finally, I discuss three types of sections the brain is commonly examined in: sagittal, horizontal/transverse, and coronal/frontal.

Know your brain: Pineal gland

Where is the pineal gland?

Pineal gland (in red). Image courtesy of Life Science Databases.

The pineal gland is considered part of the epithalamus, which is one of the main structures that make up the diencephalon. The pineal gland was so named because it has a pinecone-like appearance. Unlike many structures in the brain, the pineal gland is unpaired; whereas structures like the hippocampus or amygdala are symmetrically paired, with a copy in each hemisphere of the brain, there is only one pineal gland, and it sits right on the midline of the brain.

What is the pineal gland and what does it do?

The solitary nature and unknown function of the pineal gland contributed to the French philosopher René Descartes calling it the "seat of the soul" and suggesting it was the place where the immaterial soul communicated with the physical body. Descartes' ideas about the pineal gland were never widely accepted by his contemporaries, however, and today the function most frequently associated with the pineal gland is the secretion of the hormone melatonin, which is involved in the regulation of circadian rhythms.

There are no neurons that leave the pineal gland to carry signals to other areas of the brain. Instead, the main output of the pineal gland--and the way it communicates with the rest of the nervous system--is melatonin. The pineal gland is made up primarily of secretory cells called pinealocytes, which secrete melatonin at varying rates throughout our 24-hour cycle. The highest rates of melatonin secretion occur in the middle of the night; they begin to decrease as it gets closer to dawn. This schedule of melatonin release is maintained based on information about the amount of light in the environment that the pineal gland receives from the retina. The retina sends this information to a nucleus in the hypothalamus called the suprachiasmatic nucleus (SCN), and from there it takes a convoluted path to the pineal gland.

In addition to sending information about ambient lighting to the pineal gland, the SCN also controls circadian rhythms. The SCN has receptors for melatonin, and it uses the melatonin signal to obtain information about the time of day. Because melatonin levels are highest during the hours of darkness, the SCN can use melatonin activity as a sign that our circadian rhythm should be in its nocturnal stage. In this way, melatonin secretion can act as an important indicator that one's circadian rhythm is not in sync with the environment (e.g. if high levels of secretion are occurring but the person is still wide awake). This happens when, for example, someone has to adapt to a new 24-hour cycle after flying across several time zones. Exogenously administered melatonin, in fact, has been explored as a way of speeding up the process of adapting to a new sleep-wake cycle, with some success.

Just as melatonin secretion can provide information about the time of day, the nightly duration of melatonin secretion can provide information about the season of the year. Because longer periods of darkness occur in winter, the duration of melatonin secretion at night in the winter is slightly longer than it is in the summer. This is used as a signal in animals that are considered photoperiodic, meaning they experience biological and behavioral changes in response to the changing seasons. For example, many rodents suppress sexual activity during the winter months; it has been shown that removal of the pineal gland in rodents prevents this suppression from occurring. This suggests that melatonin secretion from the pineal gland serves as a sort of biological calendar in rodents, in the process helping to regulate their seasonal behavior. It is not clear that this function for melatonin has a great deal of relevance for humans, who are not considered photoperiodic. However, due to the onset of depressive symptoms during winter in those with seasonal affective disorder, abnormal melatonin secretion has been suspected as playing a role in the disorder, suggesting that it is within the realm of possibility that seasonal changes in melatonin secretion also affect human behavior.

Due to its close association with nighttime and circadian rhythms, melatonin has been investigated as playing a role in promoting sleep. Some have hypothesized that melatonin secretion may facilitate sleep by inhibiting activity in the SCN that promotes wakefulness. However, the true relationship between melatonin and sleep is unclear. In nocturnal animals, melatonin levels are still highest at night, suggesting a role for melatonin in circadian rhythms that does not necessarily involve sleep regulation. Many studies have investigated the effects of administering melatonin on sleep, and although there are some indications it may be effective in treating mild sleep disturbances, the results have been mixed (for example, see Ferracioli-Oda et al., 2013 and Buscemi et al., 2005).

Proper levels of melatonin secretion are important for human health, and the hormone is involved in a wide range of processes not discussed here. Perhaps because the pineal gland is highly specialized, focusing only on melatonin secretion, its importance is sometimes overlooked. The significance of melatonin in maintaining circadian rhythms, however, and the pineal gland's role in producing it, suggest that the pineal gland is an essential structure for the health of the central nervous system.

Sapède, D., & Cau, E. (2013). The pineal gland from development to function. Current Topics in Developmental Biology. DOI: 10.1016/B978-0-12-416021-7.00005-5

2-Minute Neuroscience: Spinal Cord Cross-section

In this video, I cover the spinal cord in cross-section. I discuss how the spinal cord is composed of grey and white matter. The grey matter is divided into 3 regions: the posterior horn, anterior horn, and intermediate grey matter. The white matter is divided into the posterior, anterior, and lateral funiculi. I describe all of these subdivisions and the functions they are primarily involved in.

Associating brain structure with function and the bias of more = better

It seems that, of all of the behavioral neuroscience findings that make their way into popular press coverage, those that involve structural changes to the brain are most likely to pique the interest of the public. Perhaps this is because we have a tendency to think of brain function as something that is flexible and constantly changing, and thus alterations in function do not seem as dramatic as alterations in structure, which give the impression of being more permanent.

After all, until relatively recently it was believed that we are born with a fixed number of neurons--and that was it. From the end of neural development through the rest of one's life it was thought that no new neurons were produced; thus, the inevitable occurrence of neuronal death ticked off a slow but inexorable decline into cognitive obscurity that we were helpless to prevent. We now know that this is not true, however, and that there are new neurons produced throughout one's lifespan in certain areas of the brain.

Regardless, perhaps that outdated thinking on the immutability of the brain causes people to be especially impressed by the mention of some activity changing the structure of the brain, because there is no shortage of articles in the popular press covering studies that involve changes to brain structure. Just since the beginning of 2015, there have been major news stories about methamphetamine, smoking, childhood neglect, and head trauma in professional fighters all leading to reductions in grey matter, as well as a more positive story that music training in children can lead to increased grey matter in some areas of the cortex. And it seems like stories about meditation's ability to increase grey matter are constantly being recycled among blogs and popular news sites. In all of these stories it is either implied or stated without much supporting evidence that adding more grey matter to a part of the brain increases cognitive function, while losing grey matter decreases it. For example, in a story about smoking and its effects on the cortex, the reduction in grey matter was described in this way: "As the brain deteriorates, the neurons that once resided in each dying layer inevitably get subtracted from the overall total, impairing the organ’s function."

Of course, in neurodegenerative diseases like Alzheimer's disease, we know that loss of brain tissue corresponds to increasingly more severe deficits. But what do we know for sure about non-pathological changes in the structure/size of the brain, and how they are associated with function? Is it widely accepted that more grey matter is equivalent to improved function and less to diminished function? Contrary to what you might conclude if you read some recent news articles--and in many cases, the studies they summarize--on these subjects, we have not found a consistent formula for predicting how changes in structure will affect function. And so, it is not a universal rule that more is better when it comes to grey matter.

More brain mass = better brain function?

The hypothesis that larger brain size is associated with increased mental ability can be traced back at least to the ancient Greeks (possibly to the physician Erasistratus). It has had the support of a large share of the scientific community since the 1800s when some of the first formal experiments into the matter were conducted by Paul Broca. Broca recorded the weights of autopsied brains and found a positive association between education level and brain size. These findings would later be cited by Charles Darwin as he made a case for the large brain of humans as an evolved trait responsible for our superior intelligence compared to other species.

Indeed, the hominid brain has had a unique evolutionary trajectory. It is thought to have tripled in size about 1.5 million years ago, in the process creating a large disparity between our brains and those of our non-human primate relatives like the great apes. This rapid brain expansion, along with our highly developed cognitive abilities, has led many to suggest our brain growth was directly connected to increased capacities for intellect, language, and innovation. Humans don't have the biggest brains in the animal kingdom (that distinction belongs to sperm whales, whose brains weigh about six times what ours do); rather, it is thought that the way the human brain grew--by adding more to cortical areas devoted to conceptual, rather than perceptual and motor, processing--may have been responsible for our accelerated gains in intellectual ability.

Thus, while brain size has come to be considered an important indicator of the intelligence of humans in comparison to other species, it is thought that the cerebral cortex is really the defining feature of the human brain. The cortex makes up about 80% of our brain, a proportion that is much higher than that seen in many other mammalian species. The intricate folding and complex circuitry of the cerebral cortex may contribute to the greater intellectual capabilities we have compared to other species.

The advent of neuroimaging and its refinement over the last 15 years have allowed us to test in vivo the hypothesis that brain size among individuals corresponds to intelligence and, more specifically, that the thickness of the cerebral cortex is especially linked to cognitive abilities. Many of the results have supported these hypotheses. For example, a recent analysis of 28 neuroimaging studies found a significant average correlation of .40 between brain size and general mental ability. Another study of 6-18 year-olds in the United States found cortical thickness in several areas of the brain to be associated with higher scores on the Wechsler Abbreviated Scale of Intelligence. Choi et al. (2008) also found a positive correlation between cortical thickness and intelligence measures like Spearman's g; in addition they were able to explain 50% of the variance in IQ scores using estimates of cortical thickness in conjunction with functional magnetic resonance imaging data.
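
When evaluating such numbers, it helps to remember that variance explained is the square of the correlation coefficient, so the figures above are less dramatic than they may sound. A quick check in Python (the 0.71 value is simply the correlation that would be needed to reach 50% explained variance, not a figure from the studies):

```python
# Variance explained (r squared) for the correlations mentioned above.
for label, r in [("brain size vs. general mental ability", 0.40),
                 ("correlation needed for 50% of variance", 0.71)]:
    print(f"{label}: r = {r:.2f} -> r^2 = {r * r:.2f} ({r * r:.0%} of variance)")
# An r of .40 corresponds to only 16% of variance explained; accounting
# for 50% requires a combined (multiple) correlation of about .71.
```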

Recent studies have also allowed us to observe that brain size is not completely static, and that experience (as well as age) can alter the size or structure of certain parts of the brain. For example, one well-known study looked at the brains of London taxi drivers and found that taxi drivers had larger hippocampi compared to control subjects (hypothetically because the hippocampus is involved in spatial memory). The size of hippocampi was also correlated with time spent driving a taxi, suggesting it may have been the experience of navigating a cab through London that promoted the hippocampal changes, and not just that people with better spatial memory were more likely to become taxi drivers. In another study, participants underwent a brain scan, then spent a few months learning how to juggle, then had a second brain scan. The second scan showed increased grey matter in areas of the cortex associated with perceiving visual motion. When they stopped juggling, then had a third brain scan a few months later, the size of these areas had again decreased.

Based on such results, a number of studies have also looked at how certain activities might affect cortical thickness, with the assumption that increased cortical thickness represents an enhancement of ability. For example, an influential study published in 2005 by Lazar et al. examined the brains of 20 regular meditators (with an average of about 9 years of meditation experience) and compared the thickness of their cortices to that of people with no meditation experience. The investigators found that the meditators had increased grey matter in areas like the prefrontal cortex and insula; the authors interpreted this as being indicative of enhanced attentional abilities and a reduction of the effects of aging on the cortex, among other things.

Problems with the increased mass = improved function hypothesis

But there are a few problems with conclusions like those made in the meditation study conducted by Lazar et al. The first is that in the Lazar et al. study--and in many studies that look at brain structure and associate it with function--brain structure was only assessed at one point in time. The issue with only looking at brain structure once is that it doesn't allow one to determine if the structure changed before or after the experience. In other words, in the Lazar et al. study perhaps increased cortical thickness in prefrontal areas was associated with personality traits that made individuals more likely to enjoy meditating, instead of meditating causing the structure of the cerebral cortex to change.

Regardless, even if we knew the behavior (e.g. meditation) came before the structural changes, it still would be unclear how the structural modifications translated into changes in function. Postulating, as Lazar et al. did, that the changes may have been associated with an increased ability to sustain attention or be self-aware is a clear example of confirmation bias, as the authors are interpreting results in a way that supports their hypothesis when they lack the evidence to do so. However, even if we agreed with the specious reasoning of Lazar et al. that "the right hemisphere is essential for sustaining attention" and thus structural changes to it are especially likely to affect attention, it is far from conclusive that increased cortical thickness in general represents increased function.

In fact, there are myriad examples in the literature suggesting there is not always a positive correlation between cortical thickness and cognitive ability. For example, a study involving patients with untreated major depressive disorder found they had increased cortical thickness in several areas of the brain. Another study detected increased cortical thickness in the frontal, temporal, and parietal lobes of children and adolescents born with fetal alcohol spectrum disorder. A 2011 investigation involving binge drinkers observed thicker cortices in female binge drinkers, and thinner cortices in male binge drinkers. And a report examining the effects of education level on cortical thickness found that those with higher education levels actually had thinner cortices. This is just a small sampling of the many studies indexed on PubMed that do not support the increased mass = improved function hypothesis. The reasons for this lack of consistent support could be numerous, ranging from confounds to measurement problems. But the important point is that there is no axiom that increased grey matter will result in improved functionality.

Of course this doesn't mean there is never an association between brain volume or cortical thickness and brain function. We should, however, interpret studies that identify such associations with caution. And we should be even more wary of popular press articles that summarize such studies, for when these studies are reported on by the media, the complexity of the association between structure and function--as well as the cross-sectional limitations of many of the studies--is rarely taken into consideration.

There are some methodological approaches to these types of studies that would allow investigators to make more confident interpretations of the results. First, an emphasis should be placed on using longitudinal designs. In other words, an initial brain scan should be conducted to get a baseline measure of brain structure. Then the activity (e.g. meditation) should be performed for a period of time before brain structure is assessed at least one more time. This reduces the weight that must be given to the concern that the structural differences predated the activity. Additionally, it is best for some measure of function to be taken along with the assessment of brain structure, to support any suggestion that the structural changes correspond to a functional change.
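
In analysis terms, such a longitudinal design boils down to comparing each participant against their own baseline. Below is a minimal sketch of that comparison in Python; the thickness values are fabricated purely for illustration and are not drawn from any study discussed here:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Made-up cortical thickness values (mm) for 20 participants, measured
# at baseline and again after an intervention (e.g. an 8-week program).
pre = rng.normal(2.50, 0.10, 20)
post = pre + rng.normal(0.03, 0.05, 20)   # small average increase

# Paired test: each participant serves as their own baseline, which is
# exactly what the longitudinal design described above makes possible.
t, p = stats.ttest_rel(post, pre)
print(f"mean change = {np.mean(post - pre):+.3f} mm, t = {t:.2f}, p = {p:.4f}")
```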

Many studies are already starting to utilize such approaches. For example, a recent study by Santarnecchi et al. examined the effects of mindfulness meditation on brain structure. The researchers conducted brain scans before and after an 8-week Mindfulness Based Stress Reduction (MBSR) program, using participants who had never meditated before. Participants also underwent psychological evaluations before and after the program. The participants experienced a reduction in anxiety, depression, and worry over the course of the program. But they also showed an increase in cortical thickness in the right insula and somatosensory cortex. The Lazar et al. study noted an increase in volume of the right insula as well, suggesting there may be something to this finding. And because of the longitudinal design of the study by Santarnecchi et al., along with the fact that the participants were not regular meditators, we can have more confidence that the structural changes seen came as a result of--rather than predated--the meditation practice.

As this example shows, some of the Lazar et al. results were later supported despite that study's methodological problems; a too liberal interpretation of results does not mean the findings themselves are baseless. If we are going to have confidence in the value of associations between structure and function, however, more emphasis must be placed on using study designs that allow one to infer causality. And, when it comes to the interpretation of results, a conservative approach should be advocated.

Lazar, S., Kerr, C., Wasserman, R., Gray, J., Greve, D., Treadway, M., McGarvey, M., Quinn, B., Dusek, J., Benson, H., Rauch, S., Moore, C., & Fischl, B. (2005). Meditation experience is associated with increased cortical thickness. NeuroReport, 16(17), 1893-1897. DOI: 10.1097/01.wnr.0000186598.66243.19

2-Minute Neuroscience: Exterior of the Spinal Cord

In this video, I cover the exterior of the spinal cord. I discuss how sensory and motor information is carried to and from the cord via dorsal and ventral roots, respectively. I also explain how these roots come together to form spinal nerves, and how a pair of spinal nerves leaves at each segment of the cord; these segments are classified as belonging to cervical, thoracic, lumbar, sacral, or coccygeal regions. Then I describe some major landmarks of the cord, including the conus medullaris, cauda equina, lumbar cistern, and filum terminale.

Know your brain: Striatum

Where is the striatum?

Striatum (in red). The c-shaped portion of the structure is the caudate and the more globular portion is the putamen. Image courtesy of Life Science Databases.

The striatum refers to a small group of contiguous subcortical structures: the caudate, putamen, and nucleus accumbens. The caudate and putamen are separated from one another by a white matter tract called the internal capsule, but there are many strands of grey matter that cross the internal capsule between the two structures. The white matter of the internal capsule overlaid with these grey matter "bridges" creates a striped appearance, which is why this area has come to be called the striatum (Latin for striped). The striatum is sometimes conceptualized as being divided into dorsal and ventral sections; the dorsal striatum contains the caudate and putamen, while the ventral striatum contains the nucleus accumbens.

What is the striatum and what does it do?

The striatum is one of the principal components of the basal ganglia, a group of nuclei that have a variety of functions but are best known for their role in facilitating voluntary movement. The basal ganglia receive information about a desired goal from the cerebral cortex; they help to achieve that goal by selecting the appropriate action for it and initiating movement while at the same time ensuring that oppositional movements are inhibited. The result is smooth, fluid movement. We can see the importance of the basal ganglia in movement by looking at the overt symptoms of someone with Parkinson's disease. These symptoms involve slow movement, tremors, and rigidity, and their severity is associated with the neurodegeneration of basal ganglia nuclei and their connecting pathways.

The striatum (primarily the dorsal striatum) is one of the main input areas for the basal ganglia. It receives the bulk of its incoming fibers from the cerebral cortex, but it also receives afferent fibers from the substantia nigra and thalamus. The fibers from the cerebral cortex (i.e. corticostriatal fibers) often carry information about motor plans; these plans are then modified and sent back to the cortex to be put into action. However, it should be noted that the fibers that travel to the striatum from the cortex are not only movement-related. Indeed, the striatum (and more generally the basal ganglia) is thought to be involved in many aspects of cortical function (and thus many aspects of cognition), and so it receives input not just from motor areas but also from areas throughout the cortex. The afferents from the substantia nigra, collectively known as the nigrostriatal pathway, seem to play an especially important role in movement as they are severely affected by neurodegeneration in patients suffering from Parkinson's disease. The role of the fibers from the thalamus, known as thalamostriate fibers, is not very well understood in humans.

Fibers that leave the striatum mostly travel to the main output nuclei of the basal ganglia: the globus pallidus and substantia nigra. From there, the fibers extend to the thalamus and other areas; projections from the thalamus carry the information back to the cortex.

The ventral striatum contains the nucleus accumbens, a nucleus that has been extensively studied for its role in rewarding experiences. The nucleus accumbens--and the ventral striatum as a whole--is associated with reward, reinforcement, and the progression from just experiencing something rewarding to compulsively seeking it out as part of an addiction. Thus, the ventral striatum is activated when we do--or even just anticipate doing--something we know will be pleasurable.

The afferent projections to the ventral striatum come largely from the same places as those of the dorsal striatum (although the ventral striatum seems to get more input from the amygdala and hippocampus). But the involvement of the ventral striatum in reward is most often associated with fibers that travel to the nucleus accumbens from the ventral tegmental area (VTA), a dopamine-rich area in the midbrain. This pathway that travels from the VTA to the nucleus accumbens is called the mesolimbic dopamine pathway. It is activated during rewarding experiences (e.g. during the use of addictive drugs) and therefore is considered a primary component of the reward system.

Thus, the striatum is most frequently associated with movement and mediating rewarding experiences. As noted above, however, the striatum is thought to be involved in diverse aspects of cognition and behavior. So, while movement, reward, and motivation may be the most studied of the functions associated with the striatum, they are by no means the extent of them.

Further reading: Know your brain: Basal ganglia

Know your brain: Reward system