Tuesday, April 22, 2014

Autism, SSRIs, and Epidemiology 101

I can understand the eagerness with which science writers jump on stories that deal with new findings about autism spectrum disorders (ASDs). After all, the mystery surrounding the rapid increase in ASD rates over the past 20 years has made any ASD-related study that may offer some clues inherently interesting. Because people are anxiously awaiting some explanation of this medical enigma, it seems like science writers almost have an obligation to discuss new findings concerning the causes of ASD.

The problem with many epidemiological studies involving ASD, however, is that we are still grasping at straws. There seem to be some environmental influences on ASD, but the nature of those influences is, at this point, very unclear. This lack of clarity means that the study of nearly any environmental risk factor starts out having potential legitimacy. And I don't mean that as a criticism of these studies--it's just where we're at in our understanding of the rise in ASD rates. After we account for mundane factors like increases in diagnosis due simply to greater awareness of the disorder, there's a lot left to figure out.

So, with all this in mind, it's understandable to me (at least in theory) why a study published last week in the journal Pediatrics became international news. The study looked at a sample of children that included healthy individuals along with those who had been diagnosed with ASD or another disorder involving delayed development. The researchers asked the mothers of these children about their use of selective serotonin reuptake inhibitors (SSRIs) during pregnancy. Roughly 1 in 10 Americans currently takes an antidepressant, and SSRIs are the most frequently prescribed type, so a substantial portion of the population takes them daily.

Before I tell you what the results of the study were, let me tell you why we should be somewhat cautious in interpreting them. This study is what is known as a case-control study. In a case-control study, investigators identify a group of individuals with a disorder (the cases) and a group of individuals without the disorder (the controls). Then, the researchers employ some method (e.g. interviews, examination of medical records) to find out if the cases and controls were exposed to some potential risk factor in the past. They compare rates of exposure between the two groups and, if more cases than controls had exposure to the risk factor, it allows the researchers to make an argument for this factor as something that may increase the risk of developing the disease/disorder.
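
To make that arithmetic concrete, here is a minimal sketch (in Python, using completely made-up counts that have nothing to do with the Harrington et al. data) of how an odds ratio and its 95% confidence interval are computed from the kind of 2x2 exposure table a case-control study produces:

```python
import math

# Hypothetical 2x2 exposure table (counts are invented for illustration;
# they are NOT the counts from the Harrington et al. study).
#                                  exposed  unexposed
cases_exposed, cases_unexposed = 30, 270        # children with the disorder
controls_exposed, controls_unexposed = 15, 285  # typically developing children

# Case-control studies can't estimate absolute risk, so the standard
# measure of association is the odds ratio (OR).
odds_ratio = (cases_exposed * controls_unexposed) / (cases_unexposed * controls_exposed)

# Approximate 95% confidence interval on the log-odds scale (Woolf's method).
log_se = math.sqrt(1 / cases_exposed + 1 / cases_unexposed +
                   1 / controls_exposed + 1 / controls_unexposed)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * log_se)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * log_se)

print(f"OR = {odds_ratio:.2f}, 95% CI {ci_low:.2f} to {ci_high:.2f}")
# With these made-up numbers the interval only barely excludes 1.0 (no
# association) -- the signature of a "statistically significant" but
# not especially convincing finding.
```

The detail to keep in mind is that a lower confidence bound sitting just above 1.0 is the statistical signature of a "barely significant" association, which will matter when we get to the actual results below.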

If you take any introductory epidemiology (i.e. the study of disease) course, however, you will learn that a case-control study is fraught with limitations. For, even if you find that a particular exposure is frequently associated with a particular disease, you still have no way of knowing if the exposure is causing the disease or if some other factor is really the culprit. For example, in a study done at the University of Pennsylvania in the late 1990s, researchers found that children who slept with nightlights on had a greater risk of nearsightedness when they got older. This case-control study garnered a lot of public attention as parents began to worry that they might be ruining their kids' eyesight by allowing them to use a nightlight. Subsequent studies, however, found that children inherit alleles for nearsightedness from their parents. Nearsighted parents were coincidentally more likely to use nightlights in their children's rooms (probably because it made it easier for the nearsighted parents to see).

A variable that isn't part of the researcher's hypothesis but still influences a study's results is known as a confounding variable. In the case of the nearsightedness study, the confounding variable was genetics. Case-control studies are done after the fact, so researchers have little control over other influences that may have affected the development of the disease. Thus, there are often many confounding influences on relationships detected in case-control studies.

So, a case-control study can't be used to confirm a cause-and-effect connection between an exposure and a disorder or disease. What it can do is provide leads that scientists can then follow up on with a more rigorous design (like a prospective cohort study or a randomized trial). Indeed, the scientific literature is replete with case-control studies that ended up being false leads. Sometimes, however, case-control results have been replicated with better designs, leading to important discoveries. This is exactly what happened with early reports linking smoking and lung cancer.

Back to the recent study conducted by Harrington et al. The authors found that SSRI use during the first trimester was more common in mothers whose children went on to develop ASD than in mothers whose children developed normally. The result was only barely statistically significant. That fact, combined with the width of the confidence interval, suggests it is not an overly convincing finding--but it was a finding nonetheless. In addition to the increased risk of ASD, the authors also point out that SSRI exposure during the second and third trimesters was higher among mothers of boys with other developmental delays. Again, however, the effect was just barely statistically significant, and it was even less convincing than the result concerning ASD.

So, the study ended up with some significant results that aren't all that impressive. Regardless, because this was a case-control design, there is little we can conclude from the study. To see why, think about what other factors women who take SSRIs might have in common. Perhaps one of those influences, and not the SSRI use itself, is what led to an increased risk of ASD. For example, it seems plausible that the factors that make a mother more susceptible to a psychiatric disorder might also play a role in making her child more susceptible to a neurodevelopmental disorder. In fact, a cohort study published last year with a much larger sample size found that, once the condition the women were taking SSRIs for was controlled for, there was no significant association between SSRI use during pregnancy and ASD.

The fact that this case-control study doesn't solve the mystery of ASD isn't a knock on the study itself. If anything, it's a knock on some of the science writing done in response to the study. I can't go so far as to say these types of studies shouldn't be reported on. But they should be reported on responsibly, and by writers who fully understand and acknowledge their shortcomings. For, it is somewhat misleading to the general public (most of whom are likely unaware of the limitations of a case-control study) when headlines like this appear: "Study: Moms on antidepressants risk having autistic baby boys."

The safety of SSRI use during pregnancy is still very unclear. But both SSRIs and untreated depression during pregnancy have been linked to negative health outcomes for a child. Thus, using SSRIs during pregnancy is something a woman should discuss at length with her doctor to determine if treatment of the underlying condition poses more of a risk than leaving the condition untreated. In making that decision, however, the barely significant findings from a case-control study should not really be taken into consideration.

Headline source: "Study: Moms on antidepressants risk having autistic baby boys," WND, http://www.wnd.com/2014/04/study-moms-on-antidepressants-risk-having-autistic-baby-boys/
Harrington, R., Lee, L., Crum, R., Zimmerman, A., & Hertz-Picciotto, I. (2014). Prenatal SSRI Use and Offspring With Autism Spectrum Disorder or Developmental Delay PEDIATRICS DOI: 10.1542/peds.2013-3406d

Thursday, April 17, 2014

Dear CNRS: That mouse study did not "confirm" the neurobiological origin of ADHD in humans

Late last week the French National Centre for Scientific Research (CNRS - the acronym comes from its French name, Centre national de la recherche scientifique) put out a press release describing a study conducted through a collaboration between several of its researchers and scientists from the University of Strasbourg. CNRS is a large (30,000+ employees), government-run research institution in France. It is the largest research organization in Europe and is responsible for about half of the French scientific papers published annually.

The study in question, conducted by Mathis et al., investigated the role of a brain region called the superior colliculus in disorders of attention. The superior colliculus (known as the optic tectum in non-mammalian vertebrates) is part of the midbrain, at the top of the brainstem. It is strongly connected to the visual system and is thought to play an important role in redirecting attention to salient stimuli in the environment. For example, imagine you are sitting in your favorite coffee shop quietly reading a book, when someone in a gorilla suit barges in and runs through the middle of the room. You would likely be surprised and would, somewhat reflexively, direct your attention toward the spectacle. This rapid shift in attention would be associated with activity in your superior colliculus.

It has been proposed that individuals who suffer from disorders like attention-deficit hyperactivity disorder (ADHD) may experience increased activity in the superior colliculus, which causes rapid, uncontrolled shifts of attention. Mathis et al. investigated the role of the tectum in attention using mice with a genetic abnormality that makes the superior colliculus hypersensitive.

The researchers exposed mice with this defect to a series of behavioral tests. They found that the mice performed normally on tests of visual acuity, movement, and sensory processing. However, the mice seemed to be less wary than control mice of entering areas of bright light (something mice usually avoid, since bright, open spaces leave them vulnerable to predators). Additionally, the mice performed worse on a task that required them to inhibit impulses. These abnormalities in behavior were associated with increased levels of the neurotransmitter norepinephrine in the superior colliculus.

The authors of the study mention that their work supports the hypothesis that superior colliculus overstimulation is a contributing factor in ADHD. I have no qualms with the wording used in the paper itself, but CNRS's press release about the study is titled "Confirmation of the neurobiological origin of attention-deficit disorder" and they state in the article: 
"A study, carried out on mice, has just confirmed the neurobiologial origin of attention-deficit disorder (ADD)..."
When it comes to psychiatric disorders without a clearly defined molecular mechanism (which is almost all of them), it is improbable that a finding in mice can confirm anything in humans. Our understanding of ADHD in humans is limited. We have no objective diagnostic criteria; instead, we base diagnosis on observable and self-reported symptoms.

If our understanding of a disorder in humans is based primarily on symptomatology (as opposed to the underlying pathophysiology), then the results of experiments that use animals to model the disorder become more difficult to interpret. For, if we don't know what molecular changes we can expect to see as a correlate of the disease (e.g. senile plaques in Alzheimer's), then we are resigned to trying to match symptoms of mice with symptoms of men. In this type of situation, where we don't know the true pathophysiology, we can never be sure that the symptoms we are seeing in mice and those we are seeing in men have an analogous biological origin.

Thus, when it comes to psychiatric disorders, translating directly from animals to humans is difficult. In the case of ADHD, because the biological origins of the disorder are still mostly unknown, animal models can be used as a means to explore the neurobiology of a similar manifestation of symptoms in the animal. They can't, however, be used to "confirm" anything about the human disorder. In this case, CNRS drastically overstated the importance of the study. Of course, the wording from CNRS's initial press release was then repeated by dozens of other media outlets that picked up the story.

Do I doubt that ADHD has a neurobiological origin? No. But the study by Mathis et al. did not confirm that it does. CNRS, as an institution of science, should be more careful about the claims they make in their communications with the public.

Mathis, C., Savier, E., Bott, J., Clesse, D., Bevins, N., Sage-Ciocca, D., Geiger, K., Gillet, A., Laux-Biehlmann, A., Goumon, Y., Lacaud, A., Lelièvre, V., Kelche, C., Cassel, J., Pfrieger, F., & Reber, M. (2014). Defective response inhibition and collicular noradrenaline enrichment in mice with duplicated retinotopic map in the superior colliculus Brain Structure and Function DOI: 10.1007/s00429-014-0745-5


Saturday, April 12, 2014

Early brain development and heat shock proteins

The brain development of a fetus is really an amazing thing. The first sign of an incipient nervous system emerges during the third week of development; it is simply a thickened layer of tissue called the neural plate. After about 5 more days, the neural plate has formed an indentation called the neural groove, and the sides of the neural groove have curled up and begun to fuse together. This will form the neural tube, which will eventually become the brain and spinal cord. By around 10 weeks, all of the major structures of the brain are discernible, even if they are not yet fully mature. So, in a matter of two months, the framework for the human brain is built from scratch. If that doesn't put you in awe of nature, nothing will.

Although the process of neural development is amazing, it is also very sensitive. There are indications that a number of environmental exposures during prenatal development may increase the risk of disorders like autism, schizophrenia, and epilepsy. Some of these dangerous environmental exposures are well known (e.g. alcohol consumption during pregnancy increasing the risk of developing fetal alcohol syndrome). However, there are a number of other factors whose detrimental effects on fetal neural development are still debated or have not yet been fully elucidated. For example, the effects on a fetus of substances like phthalates (plasticizers that are likely found in a number of products throughout your home), bisphenol A (another substance used in the production of plastics - found frequently in food and drink containers), and even tobacco smoke, are still being investigated. But a pregnancy free from exposure to any potentially harmful substances doesn't guarantee normal neural development. Even factors that are natural and more difficult to control, like maternal infection during pregnancy, are suspected of being detrimental in some cases.

To complicate the issue even further, it is difficult to predict who will be affected by these environmental insults and who will not. It seems that there may be a genetic susceptibility to neurodevelopmental damage that causes a particular exposure to be detrimental to one fetus, while it may not have a major impact on another with a different genetic makeup. This complication, however, also provides an opportunity to learn more about the etiology of neurodevelopmental disorders. For, if we can learn what mechanism is failing in the fetus who is affected, but functioning in the fetus who is not, then our understanding of the origin of these disorders will be drastically improved.

In a paper published last week in Neuron, Hashimoto-Torii et al. approached the problem from this angle and examined the role of heat shock proteins in neurodevelopmental problems. Heat shock proteins are proteins whose expression increases during times of cellular stress. They earned their name when it was discovered in the early 1960s that high levels of heat increased their expression in Drosophila (fruit flies). Since then, it has been learned that heat shock protein expression increases during all sorts of stress, including infection, starvation, hypoxia (lack of oxygen), and exposure to toxins like alcohol. Thus, some also refer to heat shock proteins as stress proteins.

To investigate the role of heat shock proteins in neurodevelopmental disorders, Hashimoto-Torii et al. exposed mouse embryos to three different types of environmental insults. They injected pregnant mice with either alcohol, methylmercury, or a seizure-inducing drug. Then, they looked to see how the brains of the embryos reacted. As they hypothesized, they saw a significant increase in the expression of a transcription factor (heat shock factor 1 or HSF1) that promotes the production of heat shock proteins.

When the researchers exposed mice lacking the HSF1 gene (HSF1 knockout mice) to the prenatal insults listed above, they saw that the exposed mothers had smaller litters than control mice. The pups that were born displayed malformations consistent with neurodevelopmental damage, greater susceptibility to seizures after birth, and reduced brain size. The reduction in brain volume seemed to be due to decreased neurogenesis after the insult.

To make a clearer connection between heat shock protein activation and human disease, the researchers exposed stem cells derived from schizophrenic patients to methylmercury and alcohol, and compared the response of the "schizophrenic cells" to that of cells from non-schizophrenic (control) individuals. They didn't see an overall difference in heat shock protein expression between the two types of cells, but they did see significant variability in expression among the schizophrenic cells. In other words, both schizophrenic and control cells increased heat shock protein expression after an insult, but some of the schizophrenic cells increased expression markedly more or less than others, while the control cells all displayed a relatively uniform increase. This suggests that there may be an abnormal response involving heat shock proteins in individuals with a certain genetic predisposition; perhaps this abnormal response makes the individual more susceptible to disrupted neurodevelopment.

Thus, the study by Hashimoto-Torii et al. points to heat shock proteins as a potential culprit behind what goes wrong in early brain development to lead to psychiatric disorders like schizophrenia and autism. More research will need to be done, however, to verify this role for heat shock proteins. And, even if future research supports this finding, it is likely that heat shock proteins are still only part of the puzzle. But the puzzle is complex, and so we will need to add many of these little pieces before we can begin to comprehend the whole picture.


Hashimoto-Torii, K., Torii, M., Fujimoto, M., Nakai, A., El Fatimy, R., Mezger, V., Ju, M., Ishii, S., Chao, S., Brennand, K., Gage, F., & Rakic, P. (2014). Roles of Heat Shock Factor 1 in Neuronal Response to Fetal Environmental Risks and Its Relevance to Brain Disorders Neuron DOI: 10.1016/j.neuron.2014.03.002



Wednesday, April 9, 2014

Why do I procrastinate? I'll figure it out later

If you are a chronic procrastinator, you're not alone. Habitual procrastination plagues around 15-20% of adults and 50% of college students. In a chronic procrastinator, repeated failure to efficiently complete important tasks can lead to lower feelings of self-worth. In certain contexts, it can also result in very tangible penalties. For example, a survey in 2002 found that 29% of American taxpayers procrastinated on their taxes, resulting in errors due to rushed filing that cost an average of $400 per person. More importantly, we tend to procrastinate when it comes to medical care (both preventive and therapeutic), which can involve very real costs to our well-being.

Why is the urge to procrastinate so strong? It sometimes seems that we are compelled to procrastinate by a force that is disproportionate to the small reward we may get from putting off a task we're not looking forward to. According to Gustavson et al., the authors of a study published last week in Psychological Science, a predisposition to procrastinate may have its roots in our genes.

Previous research has suggested a potential link between a tendency to procrastinate and an impulsive nature. Gustavson et al. explored this possible connection by observing the traits of procrastination and impulsivity in a group of 181 identical and 166 fraternal twins. Because identical twins share 100% of their genes and fraternal twins only share around 50% of their genes, if a trait is shared by identical twins more frequently than it is by fraternal twins, it suggests the trait has a significant genetic basis (for more on twin studies see this post).
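
To see how that logic turns twin correlations into numbers, here is a back-of-the-envelope sketch using Falconer's classic formula. The correlations below are invented purely for illustration, and Gustavson et al. actually fit a more sophisticated biometric (ACE) model rather than this shortcut:

```python
# Rough illustration of the twin-study logic via Falconer's formula.
# The correlations are hypothetical, NOT values from Gustavson et al.
r_mz = 0.46  # hypothetical similarity (correlation) of identical twin pairs
r_dz = 0.23  # hypothetical similarity (correlation) of fraternal twin pairs

# Identical twins share ~100% of their genes, fraternal twins ~50%, so
# the extra similarity of identical twins is attributed to genes.
heritability = 2 * (r_mz - r_dz)   # share of variance attributed to genes
shared_env = r_mz - heritability   # share attributed to shared environment
nonshared_env = 1 - r_mz           # everything else, incl. measurement error

print(f"heritability ~ {heritability:.2f}, shared environment ~ {shared_env:.2f}, "
      f"non-shared environment ~ {nonshared_env:.2f}")
```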

The investigators reported a significant correlation between procrastination and impulsivity (r = .65). They also reported that, in their genetic model, the genetic influences on procrastination and impulsivity were perfectly correlated (r = 1.0), suggesting those influences are completely shared. In other words, according to this study, there are no genetic influences on procrastination that aren't also affecting impulsivity.

But why would these two traits be associated with one another? Procrastination involves putting things off, while impulsivity involves doing them on a whim. Gustavson et al. suggest that both procrastination and impulsivity involve a failure in goal management and a deficit in the ability to use goals to guide behavior effectively. The authors refer to a hypothesis proposed by procrastination researcher Piers Steel that impulsivity may have been adaptive for our ancient ancestors, when survival depended more on thinking and acting quickly. In today's much safer world, however, planning for events yet to come has superseded impulsivity in terms of importance.

Thus, like many of our other bad habits, procrastination may have its roots in a behavior that was at one point adaptive and is now outdated. So, if it feels like your desire to procrastinate is driven by a force much stronger than your willpower, it may be so. If Gustavson et al. are correct, the impetus for procrastination lies in genetic programming that dates back to the Pleistocene era.

Gustavson, D., Miyake, A., Hewitt, J., & Friedman, N. (2014). Genetic Relations Among Procrastination, Impulsivity, and Goal-Management Ability: Implications for the Evolutionary Origin of Procrastination Psychological Science DOI: 10.1177/0956797614526260

Monday, April 7, 2014

Is ketamine really a plausible treatment for depression?

Last week, a publication in the Journal of Psychopharmacology made international news by reporting that patients with treatment-resistant depression (TRD) showed improvement after being given the dissociative hallucinogenic drug ketamine. Ketamine, which is traditionally used as an anesthetic in humans and other animals, is probably better known for its use as a party drug (in this context it is often called "special K"). However, a growing body of evidence has begun to suggest that ketamine may be effective (at least in the short-term) in treating depression.

I'm a bit surprised by the headlines prompted by this recent publication, though, for a number of reasons. The study, conducted by a group of scientists at Oxford, didn't really present any groundbreaking--or extremely convincing--data. The group explored the effects of ketamine infusions given over a period of three weeks. Similar protocols of ketamine administration have been tested in the past (with similar results). However, the recently-published study had some shortcomings that make it a bit less convincing than some prior ketamine studies. First, there was no control group. All patients received ketamine and, although 29% of the participants showed improvement (a modest effect, but a relevant one because these patients had experienced little benefit from other treatments in the past), there is no comparison group against which their improvement can be measured in order to gauge the true effect of the drug. Additionally, this was an open-label study, which means that the investigators and patients all knew that ketamine was being administered. In other words, there was no possibility that a placebo might be given. This could create expectancy effects in both the patients and the investigators, making the need for a control group all the more important.
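
Just to get a sense of how imprecise a 29% response rate is in a series this small, here is a rough sketch of the uncertainty around that proportion. It assumes 8 responders out of the 28 patients in the series, a count I'm inferring from the reported percentage rather than taking from the paper:

```python
import math

# Rough sketch: how precise is a 29% response rate in a small,
# uncontrolled series? Assumes 8 responders out of 28 patients
# (inferred from the reported 29%, not taken from the paper).
responders, n = 8, 28
p = responders / n
z = 1.96  # ~95% confidence

# Wilson score interval for a binomial proportion.
denom = 1 + z**2 / n
center = (p + z**2 / (2 * n)) / denom
halfwidth = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom

print(f"response rate {p:.0%}, 95% CI {center - halfwidth:.0%} to {center + halfwidth:.0%}")
# Roughly 15% to 47% -- and with no placebo arm, even that wide range
# can't be attributed to ketamine itself rather than expectancy effects.
```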

The investigators were aware of these shortcomings in the study design; they initiated the study as an exploratory venture. They were interested in knowing how ketamine infusions over a prolonged period affected memory when patients also continued to take other antidepressant medications. So, they were mostly concerned with examining safety and effects on memory (they did not observe any detrimental effects on memory), not with assessing the benefit of the treatment.

But the fact remains that, despite the headlines, this study was not a huge advancement in depression research or even research into the use of ketamine to treat TRD. There is some intrigue (especially in the media) surrounding the use of ketamine as an antidepressant because of its notoriety as a taboo recreational substance. I assume this is why a relatively minor study was reported on in major media outlets across the world.

Ketamine is also an intriguing treatment for depression in the eyes of scientists, but its abuse status has nothing to do with that. It's interesting because ketamine is thought to work as an antagonist at receptors for glutamate called NMDA receptors. Since hypotheses regarding the mechanism of depression have historically focused on monoamines like serotonin, ketamine's unique (although as yet not fully elucidated) mechanism suggests there may be other valid approaches to treating depression.

However, any publicly available ketamine treatment is at best far off and at worst improbable. Of the participants in the Oxford experiment, 29% (the same percentage that saw a benefit) withdrew, either due to lack of perceived benefit or adverse reactions. The adverse reactions ranged from anxiety and panic to a vasovagal reaction that caused a "reduced level of consciousness" and lasted for 10 minutes. Two of the patients vomited during infusions. So, although the reported improvements in a minority of patients are dramatic, there are also significant adverse effects that would make treatment undesirable for other patients. Additionally, very little is known about the potential long-term effects of ketamine treatment; there are some indications ketamine has the potential to be neurotoxic.

Ketamine may have a role to play in helping us to understand depression. But right now it is very unclear if this drug will ever be of real use in treating patients with TRD on a large scale. So, these media reports about the excitement surrounding ketamine should be taken with a grain of salt.

Diamond, P., Farmery, A., Atkinson, S., Haldar, J., Williams, N., Cowen, P., Geddes, J., & McShane, R. (2014). Ketamine infusions for treatment resistant depression: a series of 28 patients treated weekly or twice weekly in an ECT clinic Journal of Psychopharmacology DOI: 10.1177/0269881114527361