Monday, April 28, 2008

Encephalon #44 at Cognitive Daily

The 44th edition of Encephalon, a first-rate brain-blogging carnival, is up at your favorite cognitive psychology website, Cognitive Daily. There are plenty of posts in this edition, a boon for the neuroscientifically inclined reader.

Thursday, April 24, 2008

Neuroimaging and the Social Ladder

Social hierarchies, and the corresponding struggles to move up within them, are ubiquitous throughout the animal kingdom. In animal groups it is common to see some individuals attain dominant roles while others are relegated to submissive ones. When we turn our attention to our own species, however, we rarely describe ourselves in such ethological terms as dominant and submissive. To do so blurs the line between the human and nonhuman kingdoms, something many of us avoid because it tarnishes the supposed uniqueness of the human condition and threatens the more comfortable idea that we are separate from “lower” forms of animal life.

But social rank exists, and is as evident in human societies as it is in any other. It affects every aspect of our lives, including our health. A famous study of British civil servants found that lower social status was associated with poorer cardiovascular and mental health. This corresponds with numerous animal studies demonstrating the detrimental health effects of lower social status.

The concept of social hierarchy seems so universal as to suggest it may be an innate behavior, caused by neural architecture evolved specifically to regulate it. A group of researchers at the National Institute of Mental Health (NIMH) investigated this recently using neuroimaging techniques. They were hoping to find areas of the human brain that are activated specifically when assessing social rank, either of oneself or others.

To do so they used an interactive computer game that participants played for a monetary prize while their brains were scanned with functional MRI (fMRI). Throughout the game, a participant would intermittently see the pictures and scores of other players, who they thought were playing simultaneously in other rooms. In reality, the subject being scanned was the only player.

The researchers found a number of brain areas that were activated according to whether the participant felt she was succeeding or failing compared to her imaginary competitors. The reward area of the brain, specifically the ventral striatum, was activated just as strongly by a rise in comparative ranking among the other players as it was by the monetary reward itself, underscoring the importance the participants placed on social position. When the participants did worse than a player with an inferior ranking, areas of the brain correlated with emotional frustration were activated more strongly than when they were beaten by an equal or superior player. Specific areas of the brain were also activated simply during the assessment of other players as they appeared on the screen, before any wins or losses had occurred. This may reflect something of a “sizing-up” process used to assess potential competition. Additionally, more competitive players experienced greater reward activation when they won, but also more emotional pain when they lost to an inferior player.

This supports the concept that our brains are designed to struggle for social dominance, even in a society where pure dominance is relatively rare. It certainly makes sense, however, considering the competitive drives that lie in many of us, and the desperation one can experience when one feels humiliated or reduced in stature. While our competition may be much more subtle than the violent dominance battles of bears, primates, or elephant seals, it still exists, and in a much more palpable way than many care to realize.

Monday, April 21, 2008

The Serotonin Hypothesis and Neurogenesis

Although it has become a commonly accepted explanation for the neurobiology of depression among the general public (mostly due to misleading advertisements by pharmaceutical companies), the idea that depression is caused primarily by a serotonin “imbalance” is a description of the underlying processes so simplified that it becomes inaccurate. The serotonin hypothesis, proposed decades ago, has gained renewed support of late due to the efficacy of selective serotonin reuptake inhibitors (SSRIs) in treating depression. SSRIs increase extracellular levels of serotonin by blocking its reabsorption from the synaptic cleft back into the cell. The effectiveness of SSRIs led many to draw a one-to-one correlation between depression and low serotonin levels.

The truth, however, is that why SSRIs work is not well understood. Experimental attempts to isolate serotonin as the primary factor in depression have been unsuccessful. Regardless, many drug companies latched on to the serotonin hypothesis in promoting their antidepressants because it was simple for the consumer to understand.

The broader picture of the neurobiology of depression is much more complex. It begins with an unlucky mixture of genetic predisposition and environmental stressors. When that mixture is encountered, higher-than-normal levels of the stress hormone cortisol may be released. These high cortisol levels are thought to cause neuronal damage, especially in the hippocampus. Glucocorticoid receptors (GR), which are receptors for cortisol (a type of glucocorticoid), may become desensitized, resulting in a persistent stress response and further neuroendocrine disruption.

This overactivity of the neuroendocrine system, specifically the hypothalamic-pituitary-adrenal (HPA) axis, can cause the release of cytokines. Cytokines are part of an immune response, but in this case they may end up causing further instability in the neuroendocrine system along with concomitant fatigue, loss of appetite, hypersensitivity to pain, and reduced libido.

Levels of a neurotrophin called brain-derived neurotrophic factor (BDNF) also appear to be affected by all this upset in the HPA axis and hippocampus. Neurotrophins are proteins dedicated to supporting neurons, and BDNF is integral to neurogenesis, plasticity, cell maintenance, and growth. Reduced levels of BDNF have been found to be strongly correlated with depression in human patients. These diminished BDNF levels may be responsible for the hippocampal damage mentioned above, as BDNF is the primary neurotrophin of the hippocampus.

Now enter serotonin. Serotonin is a type of neurotransmitter known as a monoamine, as are dopamine and norepinephrine. After they are released into the synaptic cleft, monoamines are taken back up into the neuron, where much of the surplus is broken down by an enzyme called monoamine oxidase A (MAO-A) and its constituents are recycled for future use.

MAO-A activity appears to be increased during depression. Thus, monoamines are metabolized more quickly, leaving fewer of them to reach their receptors on the other side of the synaptic cleft. Lower monoamine levels can negatively affect mood, cognition, motivation, and the sensation of pain. It is thought that higher levels of cortisol may increase MAO-A levels, intensifying the symptoms of depression and helping to explain why SSRIs can alleviate them.
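To make the logic of that last step concrete, here is a deliberately crude, back-of-the-envelope sketch of synaptic serotonin as a simple balance between release, reuptake, and enzymatic breakdown. The rate constants are invented for illustration, not measured values, and real synaptic kinetics are far more complicated; the point is only to show why blocking reuptake (as SSRIs do) can raise extracellular serotonin even when MAO-A activity is elevated.

```python
# Toy mass-balance model of extracellular serotonin in a synapse.
# Illustrative only: the rate constants below are made up, and real
# synaptic kinetics are far more complex than one balance equation.

def steady_state_serotonin(release, k_reuptake, k_mao):
    """Steady-state extracellular serotonin when release is balanced
    by reuptake (transporter) and enzymatic breakdown (MAO-A)."""
    return release / (k_reuptake + k_mao)

baseline  = steady_state_serotonin(release=1.0, k_reuptake=0.4, k_mao=0.1)
depressed = steady_state_serotonin(release=1.0, k_reuptake=0.4, k_mao=0.3)  # elevated MAO-A
with_ssri = steady_state_serotonin(release=1.0, k_reuptake=0.1, k_mao=0.3)  # reuptake mostly blocked

print(f"baseline:             {baseline:.2f}")
print(f"elevated MAO-A:       {depressed:.2f}")   # less serotonin available to receptors
print(f"elevated MAO-A + SSRI: {with_ssri:.2f}")  # blocking reuptake restores levels
```

With these made-up numbers, faster breakdown drops the steady-state level and blocking reuptake brings it back up, which is the intuition behind the SSRI part of the story; it says nothing about the BDNF and neurogenesis effects discussed above.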

Even this is a simplified illustration of the processes of depression, making the “chemical imbalance” described in drug commercials more similar to a children’s book than a thorough explanation. And a full understanding of the process has not yet been reached. An article in this week’s Science leads us closer to that understanding, however, by examining the effects of fluoxetine on neuronal plasticity in the visual cortex of the rat.

The neurogenesis and increases in BDNF caused by antidepressants appear to be at least as closely correlated with their effectiveness as changes in the neurotransmission of monoamines like serotonin, if not more so. To further elucidate how these processes work, and specifically whether they cause beneficial modifications of neuronal circuitry in the brain, the authors of the report in Science investigated whether fluoxetine (Prozac) could cause plastic changes in the visual cortex that would improve vision in an impaired rat's eye.

They found that the plasticity promoted by fluoxetine was significant enough to restore vision in the impaired eye if the other eye was covered (covering the strong eye pushes the cortex to improve the visual system by modifying the connections serving the weaker eye). They had set out to investigate antidepressant-induced brain plasticity and neurogenesis, and in the process found that antidepressants may also be effective in the treatment of amblyopia (often referred to as lazy eye). Additionally, they provided further support for the idea that neurogenesis and the regulation of BDNF are at least as important in treating depression as monoamine levels, and probably more so. The serotonin hypothesis is a useful way for drug companies to easily explain depression, but it seems to be far from the whole truth.

Reference:

Maya Vetencourt, J.F., Sale, A., Viegi, A., Baroncelli, L., De Pasquale, R., O'Leary, O.F., Castren, E., Maffei, L. (2008). The Antidepressant Fluoxetine Restores Plasticity in the Adult Visual Cortex. Science, 320(5874), 385-388. DOI: 10.1126/science.1150516

Saturday, April 19, 2008

Using Gene Therapy to Treat Addiction

When you take into consideration the number of ways in which it can manifest itself, addiction is probably the most prevalent mental disorder for which we don’t yet have a pharmaceutical treatment. By identifying common neurobiological substrates that underlie all types of addiction, however, scientists hope to find drug targets that may one day allow it to be treated at least as methodically as other widespread disorders like depression. One of the commonalities found in the brains of addicted subjects is a reduction in the number of available dopamine receptors in reward areas of the brain. Specifically, the density of a subtype called the D2 receptor is decreased.

This diminished D2 receptor density can lead to addiction in two ways. A normal level of D2 receptors in an area of the brain called the nucleus accumbens is thought to play an important role in regulating impulsivity, and thus in avoiding an initial exposure to large amounts of drugs (or any addictive substance). Low levels may predispose drug users not only to trying drugs, but also to using them in amounts large enough to lead to addiction.

When a drug is used, it increases dopamine transmission. Over time, this increase in dopamine activity can cause D2 receptors throughout the reward system to become depleted. This occurs through a process called downregulation, a natural response in which the body decreases the number of receptors for a substance that appears to be available in excess. Downregulation can disrupt the reward system in a way that leaves an addict unable to find pleasure in “normal” activities. The drug becomes the only thing that can reliably produce a sense of pleasure, and compulsive use becomes difficult to avoid.
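To make the downregulation idea a bit more concrete, here is a toy sketch of how chronically elevated dopamine could drive receptor density down and blunt the reward signal once the drug is gone. The adaptation rule and all of the numbers are assumptions chosen purely for illustration; they are not drawn from the study.

```python
# Toy model of D2 receptor downregulation. Purely illustrative: the
# numbers and the adaptation rule are assumptions, meant only to show
# how chronic, drug-elevated dopamine can leave the reward system
# under-responsive to normal stimulation once the drug is absent.

def simulate(dopamine_levels, receptors=1.0, rate=0.05, setpoint=1.0):
    """Receptor density slowly adapts so that dopamine * receptors
    tracks a fixed setpoint (a crude stand-in for downregulation)."""
    for dopamine in dopamine_levels:
        target = setpoint / dopamine
        receptors += rate * (target - receptors)
    return receptors

# 200 "days" of drug use that triples dopamine transmission.
receptors_after_use = simulate([3.0] * 200)

normal_dopamine = 1.0
print(f"receptor density after chronic use: {receptors_after_use:.2f}")
print(f"reward signal off the drug: {normal_dopamine * receptors_after_use:.2f} (baseline = 1.00)")
```

With these invented numbers the off-drug reward signal ends up at roughly a third of baseline, which is the kind of blunted response that restoring D2 receptor levels is meant to counteract.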

This understanding of the role of D2 receptors in addiction led researchers at the U.S. Department of Energy’s Brookhaven National Laboratory to search for a way to increase the number of D2 receptors in an addicted brain, in the hopes that doing so could reverse the process of addiction. They found gene therapy to be a successful way to do it.

First, the scientists trained a group of rats to self-administer cocaine (not a difficult thing to do, as most organisms tend to like cocaine). They then injected an innocuous virus that carried the gene for D2 receptor production into the rats’ brains. It was hoped the virus would insert the D2 receptor gene into the cells of the rats’ reward system, leading to an increase in D2 production, and a consequent reduction in addictive behavior.

After the gene therapy treatment, the rats showed a 75 percent decrease in cocaine self-administration. The change lasted for about six days before the rats returned to their previous usage. The group at Brookhaven has conducted a comparable experiment in the past with alcohol-addicted rats, and found similar results.

D2 receptor density has been shown to play a role in predisposition to addiction, as well as in the propagation of addictive behavior after its onset. With experimental results like those seen at Brookhaven, it appears that manipulating D2 receptor density may be beneficial in treating addiction. Unfortunately, given gene therapy’s checkered past (see here), it will be quite some time before such methods become available for use with humans. Primate studies, however, will most likely be the next step, and if all goes well perhaps gene therapy may one day be able to treat what could arguably be called the greatest behavioral scourge of our species: the repetitive pursuit of rewards long after they’ve lost their rewarding value, a.k.a. addiction.

Tuesday, April 15, 2008

Who's the Decider?

It’s too bad Benjamin Libet didn’t live another year. If he had, he would have been able to see the first neuroimaging evidence to support what he found with an electroencephalogram (EEG) almost thirty years ago: what we consider our conscious decisions are preceded by unconscious neural activity, which seems to be the actual decider, as President Bush would say.

Libet conducted his most influential experiment on neural activity and conscious decision-making at the University of California, San Francisco, in 1979. While measuring participants’ brain activity with EEG, he asked them to carry out a simple motor task (like pressing a button) at their own volition during a set period of time. How many times and when they completed the task was up to the participant, but Libet asked them to note exactly when they felt they had made the conscious decision to make the movement. He found a consistent pattern of neural activity (the readiness potential) that preceded the conscious decision, indicating there may be unconscious processes at work in choosing to execute a motor task.

Libet’s study sparked a great deal of controversy, as some saw it as a denunciation of free will. And rightly so, as Libet himself suggested the only evidence in support of free will is our own assertion that it exists. His experiments appeared to show unconscious brain activity that preceded conscious choice, making the entire concept of “conscious choice” questionable.

A study published in Nature Neuroscience this week adds more ammo to the materialist’s belt. A group of researchers in Germany conducted a study very similar to Libet’s, but used functional MRI (fMRI) to measure brain activity instead of EEG. They analyzed the images with computer software developed to recognize specific patterns of neural activity, in this case patterns that anticipated which button the participant would decide to press.

Not only did the group find neural activity that preceded conscious choice, but using the computer programs they were able to predict which choice the participant would make, up to 7 seconds before the participant felt the decision had been made. The predictions were not perfect, but they were much better than chance.
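For readers curious what “software developed to recognize specific patterns of neural activity” looks like in practice, here is a minimal, hypothetical sketch of this kind of decoding analysis: train a classifier on voxel activity patterns recorded before the button press and test whether it predicts the upcoming choice better than chance. The data below are synthetic and the pipeline is generic scikit-learn, not a reproduction of the study’s actual methods.

```python
# Illustrative sketch of fMRI pattern decoding with synthetic data.
# Not the study's pipeline; just the general idea of predicting a
# two-way choice from voxel patterns and comparing against chance.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials, n_voxels = 200, 50
choices = rng.integers(0, 2, size=n_trials)          # 0 = left button, 1 = right button

# Fake voxel patterns: mostly noise plus a weak choice-related signal,
# standing in for activity measured well before the reported decision.
signal = np.outer(choices - 0.5, rng.normal(size=n_voxels)) * 0.4
patterns = rng.normal(size=(n_trials, n_voxels)) + signal

# Cross-validated accuracy; chance level is 50% for two choices.
accuracy = cross_val_score(LogisticRegression(max_iter=1000), patterns, choices, cv=5).mean()
print(f"decoding accuracy: {accuracy:.2f} (chance = 0.50)")
```

The cross-validated accuracy here lands modestly above 50 percent, mirroring the paper’s pattern of imperfect but better-than-chance predictions.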

The researchers assert that this study doesn’t exclude the existence of free will. Even Libet maintained that there was a role for consciousness in decision-making, not in initiating an act, but in the ability to suppress it. The authors of this study agree, indicating that the capacity to reverse a decision made by the unconscious brain—something they plan to investigate in the future—would support a type of free will.

Libet died in July of last year, unfortunately less than a year before current brain scanning technology caught up with his theories of the late 1970s.

Wednesday, April 9, 2008

Epigenetics and Alcoholism

Many of us, even those without alcohol problems, may feel more inclined to have a drink after a bad day, when stress is building up, or when we are trying to take our minds off of something that’s bothering us. This is one of the reasons alcohol is so popular: it has the ability to relieve anxiety and stress, at least while it’s being served (the next morning is another story). It’s also, however, one of the reasons alcoholism is so insidious. For an alcoholic, periods of alcohol withdrawal involve severe anxiety. In the throes of that angst, an alcoholic finds it extremely difficult to avoid returning to what their brain has identified as the most efficient stress-reducer within reach. A recent report in The Journal of Neuroscience indicates that this withdrawal anxiety may be due to changes in gene expression.

Previous research has pointed to the importance of a neuropeptide transmitter called neuropeptide Y (NPY) in managing anxiety and in modulating alcohol consumption. Low levels of NPY, specifically in the amygdala, have been found in animals that have a preference for alcohol. Additionally, knockout mice engineered to lack NPY receptors exhibit an increased proclivity for alcohol.

The researchers involved in the current study wanted to determine how fluctuations in NPY occur. They found that NPY transmission in rats is influenced by transient changes in gene expression. These changes, known in biology as epigenetic processes (epi- meaning “in addition to”), involve chemical modifications of DNA and the proteins it is packaged with that can alter gene expression but don’t affect the actual DNA sequence of the organism. Thus, the change in gene expression is somewhat temporary (in that the DNA is not permanently altered), although how long it actually lasts depends on the specifics of the process.

DNA is wound around proteins called histones, and how tightly the two are packed together can affect gene expression epigenetically. Enzymes called histone acetyltransferases (HATs) loosen that packing, while histone deacetylases (HDACs) tighten it. HATs generally promote gene expression, while HDACs inhibit it.

The research team in this study found that exposure to alcohol decreased the activity of the gene inhibitors (HDACs) in rats’ amygdalas, leading to increased gene expression. This expression resulted in higher levels of NPY, and the corresponding low anxiety levels that alcohol is known for. Withdrawal, however, increased HDAC activity, reducing NPY levels, and causing a significant increase in anxiety behavior. When the group administered a drug that blocks HDAC activity, they were able to prevent observable anxiety from occurring during withdrawal.

This suggests that a possible future treatment for alcohol withdrawal could involve pharmaceuticals that inhibit HDACs. Removing the consuming anxiety caused by withdrawal would be a potent tool in the treatment of the disorder. And, studying epigenetic processes may be a fruitful method of finding treatments for addiction in general, as transient changes in brain function (which could be due to gene expression) seem to be involved in many cases.

Monday, April 7, 2008

Trying to Make Evolutionary Sense of Menopause

This is a bit of a deviation from neuroscience (although neuroscience and evolution are fundamentally related), but I stumbled upon an article in PNAS about menopause that I found interesting, and I wanted to comment on it. I had never really thought much about the evolution of menopause, and now that I have, I realize it is a very unusual biological process (as well as a very unpleasant one for women). I don’t know if Darwin ever considered menopause in reference to his theories. If any readers know of any instances where he did, I’d be grateful if you could give me some page numbers or quotes.

According to evolutionary theory, the goal of any organism is to procreate—to pass on its genes. Thus, it has always been an enigma to evolutionary theorists why human females live so long after they have lost the ability to reproduce. They are the only primates with a long postreproductive life. It is puzzling in and of itself that they lose the ability to reproduce partway through life, something quite rare in the rest of the animal kingdom.

This anomaly has caused some to suggest that menopause is not an adaptive trait, but a byproduct of the medical advancements we’ve made that have resulted in an extended human lifespan. Proponents of this argument assert that women now simply outlive their supply of egg follicles. This hypothesis is contradicted, however, by the fact that even in contemporary hunter-gatherer societies that have no access to modern medicine, women still experience menopause and live into their sixties.

Thus, evolutionary biologists have continued to try to develop an evolutionary explanation for menopause and a long postreproductive lifespan. In order to do so, it is necessary to figure out why these things may have conferred an adaptive advantage to our ancestors. This has given evolutionary theorists fits.

The first reasonable explanation was proposed by Dr. George C. Williams in 1957. He hypothesized that menopause is adaptive because it keeps older women from being exposed to the risks associated with childbirth (which were much higher for our ancestors). This allows them to remain alive long enough to ensure their children are raised to maturity and can have children of their own (thus continuing the original mother’s gene line). This became known as the grandmother hypothesis.

Dr. Kristen Hawkes and colleagues elaborated on this hypothesis in 1997, when they studied a contemporary hunter-gatherer society in Tanzania called the Hadza. Hadza grandmothers are among the most assiduous workers in the society. They spend up to eight hours a day gathering food, which they bring home to feed their grandchildren. When Dr. Hawkes’ group saw how important the role of the grandmother was in Hadza society, they suggested that a long postreproductive life allowed those women to focus on the health of their grandchildren. This ability to provide for grandchildren and encourage the continuation of their genetic heritage, Hawkes asserted, could have led to natural selection for menopause and for living long afterwards.

Evolutionary biologists, however, were still not satisfied with this hypothesis. In order for menopause to outweigh the advantage of a continued ability to reproduce, a postmenopausal woman’s children would have to have twice as many children themselves, since a grandmother shares only a quarter of her genes with each grandchild but half with each of her own children. Infant mortality among those grandchildren would also have to be virtually nonexistent. Thus, when one crunches the numbers, the grandmother hypothesis doesn’t seem to quite add up. Certainly grandmotherly assistance in raising a daughter’s children is adaptive, but a quantitative analysis can’t justify the loss of childbearing ability to begin with.
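The “twice as many” figure comes straight out of that relatedness arithmetic; here it is spelled out as a tiny, illustrative calculation (the payoff framing is my own simplification, not the models used in the literature).

```python
# Back-of-the-envelope kin-selection arithmetic. The relatedness values
# are standard; treating them as simple genetic "payoffs" is a crude
# simplification for illustration only.

relatedness_to_child = 0.5        # a mother shares half her genes with each child
relatedness_to_grandchild = 0.25  # and a quarter with each grandchild

# How many extra grandchildren are needed to match the genetic payoff
# of one more child of her own?
grandchildren_per_forgone_child = relatedness_to_child / relatedness_to_grandchild
print(grandchildren_per_forgone_child)  # 2.0
```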

Researchers at the Universities of Cambridge and Exeter published a paper last week in PNAS that they hope may help to resolve the debate over the evolutionary origins of menopause. They suggest that previous models have focused on personal fitness and kin-selected fitness, or in other words, the health risks of reproduction for an older woman and the assistance she provides to the survival of her kin (as seen in the Hadza). What is ignored, they assert, is competition—namely reproductive competition between new females introduced into a group and older females who already have offspring in the group.

Their model is based on what is called “female-biased dispersal”, which simply means that in our species’ hunter-gatherer past, females were more likely than males to move between social groups. A female newcomer in a group would have no genetic ties to anyone else in the group, and would have to compete with other women in the group for chances to reproduce. Older women who already had children could still continue their genetic heritage through grandchildren, instead of having more children themselves. Thus, the newcomer female would win the competition because procreating with a male in the group was her only reproductive option, while the older female had the option of gaining a grandchild and helping it to survive—an option that gives her a better chance of promoting her genetic heritage than continuing to procreate herself.

As evidence for this model, the authors present data showing that reproductive overlap between generations in humans is extremely low compared to other primates. On average, women have their first baby at age 19 and their last at age 38—exactly when their first-born has reached normal breeding age. They also state that the rate of attrition of the initial oocyte stock in human females should allow for reproduction up to an age of about 70. At around age 38, however, the rate of follicle loss (the “follicular hazard”) increases sharply. By age 50 (the average start of menopause), follicle stocks have dropped below reproductive levels. Thus, according to the researchers, reproductive senescence accelerates at just the age when a woman can begin to expect reproductive competition from the next generation of females.
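The claim that the generations barely overlap follows directly from those average ages; here is a quick, purely illustrative check using the paper’s round numbers.

```python
# Quick arithmetic check of the reproductive-overlap claim, using the
# round average ages quoted above (illustration only).

age_at_first_birth = 19
age_at_last_birth = 38

# The mother's age when her first-born reaches the average age of
# first reproduction herself:
mother_age_when_firstborn_starts = age_at_first_birth + age_at_first_birth  # 38

overlap_years = age_at_last_birth - mother_age_when_firstborn_starts
print(overlap_years)  # 0 -- mother and daughter barely reproduce at the same time
```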

The authors suggest this hypothesis as a complement to, not a replacement for, the grandmother hypothesis. While the assistance a grandmother provides in caring for her grandchildren has an adaptive quality, they argue it cannot explain a female’s loss of reproductive ability. A combination of both theories, they state, best explains menopause and a long postreproductive lifespan.

While this is certainly a plausible hypothesis, it is far from the last word on the topic. In order for this theory to develop a stronger foundation, more studies must be conducted on other species with cooperative breeding societies. It will be important to find out whether the reproductive overlap in these groups is similar to that seen in humans. And, although the reproductive overlap data is intriguing, menopause still seems to me to be an awfully complex mechanism to have evolved just to reduce reproductive competition. Of course, evolution has resulted in many other inexplicably complex mechanisms.

If I were a woman I would be a little disgusted with evolution for forcing me to go through all of these uncomfortable processes like monthly cycles, pregnancy, and menopause. If evolution were fair, men would have to deal with at least one of these burdens. On the other hand, as a man I’m glad evolution isn’t fair—sorry!

Reference:

Cant, M.A., Johnstone, R.A. (2008). Reproductive conflict and the separation of reproductive generations in humans. Proceedings of the National Academy of Sciences DOI: 10.1073/pnas.0711911105

Wednesday, April 2, 2008

A Triumph for Stem Cell Research

Although the potential applications of stem cell therapy are numerous, right now some of its most promising conceivable uses are in the treatment of degenerative brain disorders, such as Alzheimer’s disease (AD) or Parkinson’s disease (PD). In both of these afflictions, essential brain regions deteriorate, leading to notoriously debilitating symptoms. In AD, cholinergic neurons are depleted, while a loss of dopaminergic neurons is responsible for the effects of PD. Some scientists see disorders like these as ideal cases for stem cell treatment. In theory, if cholinergic or dopaminergic neurons are deteriorating, one could implant stem cells into the patient’s brain and prod them to form new neurons, offsetting the atrophy caused by the disease.

Thus far, attempts to do this in laboratory animals have had mixed results. Sometimes a slight improvement can be seen, lending credence to the potential validity of the procedure. More often, though, a diseased animal fails to get better after a stem cell injection, indicating there are problems with the technique. It is thought that those problems may stem from genetic incompatibility between the implanted stem cells and the subject’s immune system. Perhaps the immune system recognizes the stem cells as foreign and initiates a response to destroy them, which could explain the animal’s lack of improvement.

Any scientists in training might want to pause for a moment before reading on and try to think about a logical solution to this problem. If a mouse’s immune system is rejecting stem cells from another mouse, what is a way to get around this?

The answer is: use stem cells genetically identical to the subject mouse’s own cells. A group of researchers at Memorial Sloan-Kettering Cancer Center induced PD in mice by injecting them with a toxin. They then took skin cells from the tails of the mice and did a little DNA switcheroo. They took the DNA out of the skin cells and transferred it into mouse egg cells whose own DNA had already been removed. The group then prodded the egg cells to divide, eventually producing stem cells as part of normal embryonic development. The researchers added the appropriate growth factors to the stem cells to cause them to differentiate into brain cells.

They then injected the newly formed brain cells into the PD mice. The immune systems of the mice recognized the brain cells as “self”, since they were genetically identical, so no immune response was mounted, and the mice showed significant neurological improvement. Out of about 100,000 genetically matched brain cells injected into each PD mouse, approximately 20,000 survived to function in the brain. Of course, the study also included a comparison group that received genetically dissimilar cells, and these mice did not get healthier. Only a few hundred of the genetically different cells survived in the brains of the mice in this group (out of the same number injected).

This is what stem cell researchers have been waiting for: the use of somatic cell nuclear transfer (cloning) technology to make replacement cells for the body, resulting in clear evidence that it can lead to significant recovery from degenerative disorders. It is vindication for those who have been proclaiming the limitless possibilities of stem cells.

It does not, however, get around the fundamental problem stem cell advocates face: it requires the use of embryonic cells to create the stem cells. Thus, despite its potential, it will continue to face harsh political opposition. One has to hope, however, that as these procedures are perfected in organisms like mice, opponents will have to adopt a more utilitarian perspective. Perhaps using some of the half a million frozen embryos collecting dust in in vitro fertilization clinics across the country would be given more serious consideration if their ability to alleviate the suffering of living PD patients, who number over a million in the U.S. alone, had been demonstrated repeatedly in animal studies.

For other posts from me on stem cells, go here or here.