Associating brain structure with function and the bias of more = better

It seems that, of all of the behavioral neuroscience findings that make their way into popular press coverage, those that involve structural changes to the brain are most likely to pique the interest of the public. Perhaps this is because we have a tendency to think of brain function as something that is flexible and constantly changing, and thus alterations in function do not seem as dramatic as alterations in structure, which give the impression of being more permanent.

After all, until relatively recently it was believed that we are born with a fixed number of neurons--and that was that. From the end of neural development through the rest of one's life, it was thought, no new neurons were produced; thus, the inevitable death of neurons set in motion a slow but inexorable cognitive decline that we were helpless to prevent. We now know this is not true, however, and that new neurons are produced throughout the lifespan in certain areas of the brain.

Regardless, perhaps that outdated thinking about the immutability of the brain is why people are especially impressed by the mention of some activity changing the structure of the brain; certainly, there is no shortage of articles in the popular press covering studies that involve changes to brain structure. Just since the beginning of 2015, there have been major news stories about methamphetamine, smoking, childhood neglect, and head trauma in professional fighters all leading to reductions in grey matter, as well as a more positive story that music training in children can lead to increased grey matter in some areas of the cortex. And it seems like stories about meditation's ability to increase grey matter are constantly being recycled among blogs and popular news sites. In all of these stories it is either implied or stated without much supporting evidence that adding more grey matter to a part of the brain increases cognitive function, while losing grey matter decreases it. For example, in a story about smoking and its effects on the cortex, the reduction in grey matter was described in this way: "As the brain deteriorates, the neurons that once resided in each dying layer inevitably get subtracted from the overall total, impairing the organ’s function."

Of course, in neurodegenerative diseases like Alzheimer's disease, we know that loss of brain tissue corresponds to increasingly more severe deficits. But what do we know for sure about non-pathological changes in the structure/size of the brain, and how they are associated with function? Is it widely accepted that more grey matter is equivalent to improved function and less to diminished function? Contrary to what you might conclude if you read some recent news articles--and in many cases, the studies they summarize--on these subjects, we have not found a consistent formula for predicting how changes in structure will affect function. And so, it is not a universal rule that more is better when it comes to grey matter.

More brain mass = better brain function?

The hypothesis that larger brain size is associated with increased mental ability can be traced back at least to the ancient Greeks (possibly to the physician Erasistratus). It has had the support of a large share of the scientific community since the 1800s when some of the first formal experiments into the matter were conducted by Paul Broca. Broca recorded the weights of autopsied brains and found a positive association between education level and brain size. These findings would later be cited by Charles Darwin as he made a case for the large brain of humans as an evolved trait responsible for our superior intelligence compared to other species.

Indeed, the hominid brain has had a unique evolutionary trajectory. It is thought to have roughly tripled in size over the past couple of million years, in the process creating a large disparity between our brains and those of our non-human primate relatives like the great apes. This rapid brain expansion, along with our highly developed cognitive abilities, has led many to suggest our brain growth was directly connected to increased capacities for intellect, language, and innovation. Humans don't have the biggest brains in the animal kingdom, however (that distinction belongs to sperm whales, which have brains that weigh about six times what ours do), but it is thought that the way the human brain grew--by adding more to cortical areas devoted to conceptual, rather than perceptual and motor, processing--may have been responsible for our accelerated gains in intellectual ability.

Thus, while brain size has come to be considered an important indicator of the intelligence of humans in comparison to other species, it is thought that the cerebral cortex is really the defining feature of the human brain. The cortex makes up about 80% of our brain, a proportion that is much higher than that seen in many other mammalian species. The intricate folding and complex circuitry of the cerebral cortex may contribute to the greater intellectual capabilities we have compared to other species.

The advent of neuroimaging and its refinement over the last 15 years have allowed us to test in vivo the hypothesis that brain size among individuals corresponds to intelligence and, more specifically, that the thickness of the cerebral cortex is especially linked to cognitive abilities. Many of the results have supported these hypotheses. For example, a recent analysis of 28 neuroimaging studies found a significant average correlation of .40 between brain size and general mental ability. Another study of 6- to 18-year-olds in the United States found that greater cortical thickness in several areas of the brain was associated with higher scores on the Wechsler Abbreviated Scale of Intelligence. Choi et al. (2008) also found a positive correlation between cortical thickness and intelligence measures like Spearman's g; in addition, they were able to explain 50% of the variance in IQ scores using estimates of cortical thickness in conjunction with functional magnetic resonance imaging data.
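For a sense of what those numbers mean, recall that the proportion of variance one measure explains in another is simply the square of their correlation. A minimal Python sketch, using invented values rather than data from any of the studies above, makes the relationship concrete:

```python
import numpy as np

# Invented example values for illustration only (not data from the studies cited above).
rng = np.random.default_rng(0)
n = 200
brain_size = rng.normal(1200, 100, n)                          # hypothetical brain volumes (cm^3)
iq = 100 + 0.06 * (brain_size - 1200) + rng.normal(0, 14, n)   # hypothetical test scores

r = np.corrcoef(brain_size, iq)[0, 1]
print(f"correlation r = {r:.2f}")              # roughly .40 with these made-up parameters
print(f"variance explained r^2 = {r**2:.0%}")  # roughly 16%
```

A correlation of .40 on its own corresponds to only about 16% of the variance explained, which is part of what makes a model accounting for half of the variance in IQ a notably stronger result.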

Recent studies have also allowed us to observe that brain structure is not completely static, and that experience (as well as age) can alter the size or structure of certain parts of the brain. For example, one well-known study looked at the brains of London taxi drivers and found that they had larger hippocampi than control subjects (hypothetically because the hippocampus is involved in spatial memory). Hippocampal size was also correlated with time spent driving a taxi, suggesting it may have been the experience of navigating a cab through London that promoted the hippocampal changes, and not just that people with better spatial memory were more likely to become taxi drivers. In another study, participants underwent a brain scan, spent a few months learning how to juggle, and then had a second scan, which showed increased grey matter in areas of the cortex associated with perceiving visual motion. When they stopped juggling and had a third scan a few months later, the size of these areas had decreased again.

Based on such results, a number of studies have also looked at how certain activities might affect cortical thickness, with the assumption that increased cortical thickness represents an enhancement of ability. For example, an influential study published in 2005 by Lazar et al. examined the brains of 20 regular meditators (with an average of about 9 years of meditation experience) and compared the thickness of their cortices to that of people with no meditation experience. The investigators found that the meditators had increased grey matter in areas like the prefrontal cortex and insula; the authors interpreted this as being indicative of enhanced attentional abilities and a reduction of the effects of aging on the cortex, among other things.

Problems with the increased mass = improved function hypothesis

But there are a few problems with conclusions like those made in the meditation study conducted by Lazar et al. The first is that in the Lazar et al. study--and in many studies that look at brain structure and associate it with function--brain structure was only assessed at one point in time. The issue with only looking at brain structure once is that it doesn't allow one to determine if the structure changed before or after the experience. In other words, in the Lazar et al. study perhaps increased cortical thickness in prefrontal areas was associated with personality traits that made individuals more likely to enjoy meditating, instead of meditating causing the structure of the cerebral cortex to change.

Regardless, even if we knew the behavior (e.g. meditation) came before the structural changes, it still would be unclear how the structural modifications translated into changes in function. Postulating, as Lazar et al. did, that the changes may have been associated with an increased ability to sustain attention or be self-aware is a clear example of confirmation bias: the authors are interpreting the results in a way that supports their hypothesis when they lack the evidence to do so. However, even if we accepted the specious reasoning of Lazar et al. that "the right hemisphere is essential for sustaining attention" and thus that structural changes to it are especially likely to affect attention, it is far from established that increased cortical thickness in general represents increased function.

In fact, there are myriad examples in the literature suggesting that there is not always a positive correlation between cortical thickness and cognitive ability. For example, a study involving patients with untreated major depressive disorder found they had increased cortical thickness in several areas of the brain. Another study detected increased cortical thickness in the frontal, temporal, and parietal lobes of children and adolescents with fetal alcohol spectrum disorders. A 2011 investigation of binge drinking observed thicker cortices in female binge drinkers and thinner cortices in male binge drinkers. And a report examining the effects of education level on cortical thickness found that those with higher education levels actually had thinner cortices. This is just a small sampling of the many studies indexed on PubMed that do not support the increased mass = increased function hypothesis. The reasons for this lack of consistent support could be numerous, ranging from confounds to measurement problems. But the important point is that there is no axiom that increased grey matter will result in improved functionality.

Of course this doesn't mean there is never an association between brain volume or cortical thickness and brain function. We should, however, interpret studies that identify such associations with caution. And we should be even more wary of popular press articles that summarize such studies, for when these studies are reported on by the media, the complexity of the association between structure and function--as well as the cross-sectional limitations of many of the studies--is rarely taken into consideration.

There are some methodological approaches that would allow investigators to make more confident interpretations of these types of results. First, an emphasis should be placed on using longitudinal designs. In other words, an initial brain scan should be conducted to get a baseline measure of brain structure; then the activity (e.g. meditation) should be performed for a period of time before brain structure is assessed at least one more time. This reduces the concern that differences in structure predated the activity rather than resulting from it. Additionally, some measure of function should be taken alongside each assessment of brain structure to support any suggestion that the structural changes correspond to a functional change.
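To make that logic concrete, here is a minimal sketch of the kind of analysis a longitudinal design permits--a paired comparison of pre- and post-intervention cortical thickness, plus a check on whether structural change tracks functional change. The numbers and variable names are entirely hypothetical; this is not the analysis from any study discussed here.

```python
import numpy as np
from scipy import stats

# Hypothetical data: cortical thickness (mm) in a region of interest measured before
# and after an intervention, plus a behavioral score collected at both time points.
rng = np.random.default_rng(1)
n = 25
thickness_pre = rng.normal(2.50, 0.10, n)
thickness_post = thickness_pre + rng.normal(0.02, 0.03, n)   # small, invented change
behavior_pre = rng.normal(50, 8, n)
behavior_post = behavior_pre + 30 * (thickness_post - thickness_pre) + rng.normal(0, 1, n)

# Did thickness change from baseline to follow-up? (paired test)
t, p = stats.ttest_rel(thickness_post, thickness_pre)
print(f"paired t = {t:.2f}, p = {p:.3f}")

# Does the structural change correspond to a functional change?
r, p_r = stats.pearsonr(thickness_post - thickness_pre, behavior_post - behavior_pre)
print(f"change-change correlation r = {r:.2f}, p = {p_r:.3f}")
```

Without the baseline scan, neither of these comparisons is possible; one is left correlating a single structural snapshot with a single functional snapshot and guessing at the direction of causality.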

Many studies are already starting to utilize such approaches. For example, a recent study by Santarnecchi et al. examined the effects of mindfulness meditation on brain structure. The researchers conducted brain scans before and after an 8-week Mindfulness Based Stress Reduction (MBSR) program, using participants who had never meditated before. Participants also underwent psychological evaluations before and after the program. The participants experienced a reduction in anxiety, depression, and worry over the course of the program. But they also showed an increase in cortical thickness--in the right insula and somatosensory cortex. The Lazar et al. study also noted increased grey matter in the right insula, suggesting there may be something to this finding. And because of the longitudinal design of the study by Santarnecchi et al., along with the fact that the participants were not regular meditators, we can have more confidence that the structural changes came as a result of--rather than predated--the meditation practice.

As this example shows, the fact that the results of the Lazar et al. study were interpreted too liberally does not mean the findings were baseless; despite the study's methodological problems, some of its results were later supported. If we are going to have confidence in the value of associations between structure and function, however, more emphasis must be placed on study designs that allow one to infer causality. And, when it comes to interpreting the results, a conservative approach should be advocated.

Lazar, S., Kerr, C., Wasserman, R., Gray, J., Greve, D., Treadway, M., McGarvey, M., Quinn, B., Dusek, J., Benson, H., Rauch, S., Moore, C., & Fischl, B. (2005). Meditation experience is associated with increased cortical thickness. NeuroReport, 16(17), 1893-1897. DOI: 10.1097/01.wnr.0000186598.66243.19

Neuromyths and the disconnect between science and the public

When the movie Lucy was released in the summer of 2014, it was quickly followed by a flurry of attention surrounding the idea that we only use 10% of our brains. According to this perspective, around 90% of our neurons lie dormant, all the while teasing us with the reminder that we have only achieved a small fraction of our human potential. In the movie, Scarlett Johansson plays a woman who takes an experimental new drug that makes her capable of using 100% of her brain. Due to this sudden enhancement in brain utilization, she develops unprecedented cognitive abilities, as well as some extrasensory capabilities like telepathy and telekinesis. Thus, the plot of the movie actually hinges on the accuracy of the 10% of the brain idea. Unfortunately, it is an idea that has at this point been thoroughly debunked. In truth, it appears that all of our brain is active fairly constantly. Although some areas may be more active at times than others, there are no areas that ever go completely dark.

Despite this understanding of full-brain utilization being widely endorsed throughout the scientific community, the 10% of the brain myth is still accepted as true by a significant proportion of the public (which might explain why Lucy was able to rake in close to $460 million worldwide). In fact, a recent survey of educators in five different countries--the United Kingdom, The Netherlands, Turkey, Greece, and China--found that the percentage of teachers who believe in the 10% of the brain myth ranges from a low of 43% in Greece to a high of 59% in China.

The same survey identified a number of other inaccurate beliefs about the brain held by educators--beliefs that have come to be categorized as neuromyths. For example, 44-62% of teachers believed that consuming sugary drinks and snacks would be likely to affect the attention level of their students. The idea that sugar intake can decrease children's attention and increase their hyperactivity has been around since the 1970s. The first formal study to identify a relationship between sugar and hyperactivity was published in 1980, but it was an observational study without the ability to make any determination of a causal relationship. Since then more than a dozen placebo-controlled studies have been conducted, but a relationship between sugar and hyperactivity has not been supported. In fact, some studies found sugar to be associated with decreased activity. Yet, if you spend an afternoon at a toddler's birthday party, your chances of overhearing a parent attributing erratic toddler behavior to cake are around 93% (okay, that's a made-up statistic but the chances are high).

According to the survey, a large percentage (ranging from 71% in China to 91% in the U.K.) of teachers in the countries mentioned above also believe that hemispheric dominance is an important factor in determining individual differences in learning styles. In other words, they believe that some people think more with their "left brain" and others more with their "right brain," and that this lateralization of brain function is reflected in personalities and learning styles. The concept of hemispheric dominance has been an oft-discussed one in neuroscience since the 1860s, when Pierre Paul Broca identified a dominant role for the left hemisphere (in most individuals) in language. Since the middle of the twentieth century, however, the concept of hemispheric dominance has been extrapolated to a number of functions other than language. Many now associate the right side of the brain with creativity and intuitive thinking, while the left side of the brain is linked to analytical and logical thought. By extension, people who are creative are sometimes said to be more "right-brained," while those who are more analytical are said to be "left-brained."

These broad characterizations of individuals as using one side of their brain more than the other, however, are not supported by research. Although there are certain behaviors that seem to rely more on one hemisphere than the other--language and the left hemisphere being the best example--overall the brain functions as an integrated whole, and individuals do not preferentially use one hemisphere over the other based on their personalities. Still, this myth has become so pervasive that a recommendation that children be identified as right- or left-brained, and teaching approaches modified accordingly, has even made its way into educational texts.

Where do neuromyths come from?

Neuromyths are generally neither created nor spread with malicious intent. Although there may be instances where inaccurate neuroscience information is used by entrepreneurs hoping to convince the public of the viability of a dubious product, usually neuromyths arise out of genuine scientific confusion. For example, the misconception that sugar causes hyperactivity in children was bolstered by a study that did detect such an effect. The scientific status of the hypothesis was then forced to remain in limbo for a decade until more studies could be conducted. Those subsequent, better-designed studies failed to find a relationship, but by that time the myth had taken on a life of its own. The fact that it was so easy for parents to mistakenly attribute poor behavior to a previous sugary treat helped to sustain and propagate the inaccuracy.

So, some neuromyths are born from a published scientific finding that identifies a potential relationship, and they grow simply because it takes time--during which faulty information can spread--to verify such a finding. Many myths also arise, however, from inherent biases--both personal and cultural--or from the misinterpretation of data. At times, the sheer complexity of the field of neuroscience may be a contributing factor, as it may cause people to seek out overly simplistic explanations of how the brain works. These oversimplifications can be alluring because they are easy to understand, even if they are not fully accurate. For example, the explanation of depression--a disorder with a complex and still incompletely understood etiology--as being attributable to an imbalance of one neurotransmitter (i.e. serotonin) was popular in scientific and public discourse for decades before being recognized as too simplistic.

After the scientific community has untangled any confusion that may have led to the creation of a neuromyth, it still takes quite a long time for the myth to die out. In part, this is because there is a disconnect between the information that is readily available to the public and that which is accessible to the scientific community. Most of the general public do not read academic journals. So, if several studies that serve to debunk a neuromyth are published over the course of a few years, the public may be unaware of these recent developments until the findings are communicated to them in some other form (e.g. a website or magazine that has articles on popular science topics).

Eventually this knowledge does find its way into the public discourse; it's just a question of when. The 10% of the brain myth, for example, has probably lasted for at least a century, but by 2014 enough of the non-scientific community was aware of the inaccuracies in the plot of Lucy to raise something of an uproar about them. There are other neuromyths, however, that have only recently become part of public knowledge, and thus we are likely to see them come up again and again over the next several years before widespread appreciation of their erroneousness emerges.

The myth of three, a current neuromyth

One example of a (relatively) recently espoused neuromyth is sometimes referred to as the "myth of three." The underlying idea is that there is a critical period between birth and three years of age during which most of the major events of early brain development occur. This is a time of extensive synaptogenesis--the formation of new connections between neurons. According to the myth of three, if the external environment during this time isn't conducive to learning or healthy brain development, the effects can range from a missed--and potentially lost--opportunity for cognitive growth to irreversible damage. Conversely, by enriching a child's environment during these first three years, you can increase the chances he will grow up to be the next Doogie Howser, MD. In other words, ages 0 to 3 represent the most important years of a child's learning life and essentially determine the course of his or her future cognitive maturation. Hillary Clinton aptly summarized the myth of three while speaking to a group of educators in 1997: "It is clear that by the time most children start preschool, the architecture of the brain has essentially been constructed."

There are several problems with the myth of three. The first is with the assumption that age 0-3 is the most critical learning period in our lifetime. This assumption is based primarily on evidence of high levels of synaptogenesis during this time, but it is not clear that this provides convincing support for the unrivaled importance of 0-3 as a learning period. Synaptic density does reach a peak during this time, but it then remains at a high level until around puberty, and some skills that we begin to learn between ages 0 to 3 continue to be improved and refined over the years. In fact, with certain skills (e.g. problem solving, working memory) it seems we are unable to attain true intellectual maturity until we have spent years developing them. Thus, it is questionable if high levels of synaptogenesis are adequate evidence to suggest age 0-3 is the most critical window for learning that we have in our lives.

Also, it is not clear that a disruption in learning during the supposed "critical period" of ages 0-3 would have widespread effects on brain function. There are critical or sensitive periods that have been identified for certain brain functions, and some--but not all--rely on external stimulation for normal development to occur. For example, there are aspects of vision that depend on stimulation from the external environment to develop adequately. Indeed, for a complex ability like vision there are thought to be different critical periods for different specific functions like binocular vision or visual acuity. Some of these critical periods do occur between birth and age three. However, some extend well past age 3 and do not correlate well with the period of high synaptogenesis thought to be so important by those who originally advocated the myth of three. Because these critical periods significantly differ by function, it is inaccurate to refer to age 0-3 as a critical period for brain development in general. Thus, deprivation of stimulation during this age range has the potential to affect certain functions, depending on the specific timing and severity of the deprivation, but the kind of widespread cognitive limitation implied by the myth of three does not seem likely to occur. Additionally, much of the data used to support the assertion of a critical period from 0 to 3 years involves severe sensory deprivation rather than missed learning opportunities, even though the myth of three has more frequently been used to warn us of the ramifications of the latter.

Furthermore, a dearth of stimulation or lack of learning during a critical or sensitive period doesn't always translate into an irreparable deficit. For example, if a baby is not exposed to a language before 6 months of age, she will have a harder time distinguishing the subtle differences among the speech sounds that make up that language. However, the fact that many adults are able to acquire a second language without having been exposed to it before 6 months of age suggests this doesn't translate into an inability to ever learn the language; it just makes learning it more difficult. So, even when the environmental conditions aren't conducive to the development of a specific function, the brain often remains capable of rescuing that function if learning is resumed later in life.

Additionally, while improving the environment of a child growing up in some form of severe deprivation is beneficial, it is not clear how much enriching the environment of a child already growing up in good conditions will hasten brain development. Yet this principle forms the cornerstone of a multimillion-dollar industry that markets to parents, advertising ways they can raise their infants' IQs by exposing them to things like Baby Mozart CDs, in the hope that exposure to the music of Mozart will in some subliminal way lay the foundation for more rapid intellectual growth. Of course, as ramifications of misunderstood science go, making parents more invested in their child's intellectual development is relatively benign. But sometimes that investment comes with a good dose of anxiety, high expectations, and wasted resources that could have been better used in other ways.

The myth of three, however, was not started by companies marketing products meant to produce baby geniuses; they only helped to propagate it. The myth of three is a good example of how legitimate scientific confusion can engender the development of a neuromyth. Our understanding of early brain development, sensitive periods, and the best ages for educational interventions is still evolving, and the details continue to be debated. And, because early interventions do seem to be beneficial for children raised in impoverished conditions, there was a plausible reason to expect that environmental enrichment could augment the development of already healthy children. The fact that the myth is partially based in truth, and that some of the answers are not yet clear, makes it likely that this will be an especially persistent belief.

Where do we go from here?

As can be seen from the proliferation of the myth of three, it seems our awareness of the potential for neuromyths to develop is not enough to stop them from doing so. So how can we at least reduce the impact of these inaccurate beliefs? One way is by improving the neuroscientific literacy of the public; a step toward accomplishing this involves increasing the neuroscientific knowledge of our educators. A new field is emerging to address the disconnect between recent neuroscience research and the information possessed by educators, although it is so new that it is still awaiting a name (neuroeducation is one possibility).

Hopefully, as the field of neuroscience itself also grows, exposure to and understanding of neuroscientific topics among the public will increase. This may make it more difficult for ideas like the 10% of the brain myth to maintain a foothold. It may be impossible to fully eradicate the existence of neuromyths, as they are often based on legitimate scientific discoveries that are later found to be specious (and of course we would need scientific perfection to ensure that false leads are never created). However, awareness of the potential for erroneous beliefs to spread along with an increased understanding of how the brain works may serve to decrease the prevalence of neuromyths.

Bruer, JT. (1998). The brain and child development: time for some critical thinking. Public Health Reports, 113 (5), 388-397.

Howard-Jones, P. (2014). Neuroscience and education: myths and messages. Nature Reviews Neuroscience, 15(12), 817-824. DOI: 10.1038/nrn3817

Autism, SSRIs, and Epidemiology 101

I can understand the eagerness with which science writers jump on stories that deal with new findings about autism spectrum disorders (ASDs). After all, the mystery surrounding the rapid increase in ASD rates over the past 20 years has made any ASD-related study that may offer some clues inherently interesting. Because people are anxiously awaiting some explanation of this medical enigma, it seems like science writers almost have an obligation to discuss new findings concerning the causes of ASD.

The problem with many epidemiological studies involving ASD, however, is that we are still grasping at straws. There seem to be some environmental influences on ASD, but the nature of those influences is, at this point, very unclear. This lack of clarity means that the study of nearly any environmental risk factor starts out having potential legitimacy. And I don't mean that as a criticism of these studies--it's just where we're at in our understanding of the rise in ASD rates. After we account for mundane factors like increases in diagnosis due simply to greater awareness of the disorder, there's a lot left to figure out.

So, with all this in mind, it's understandable (at least in theory) to me why a study published last week in the journal Pediatrics became international news. The study looked at a sample of children that included healthy individuals along with those who had been diagnosed with ASD or another disorder involving delayed development. The researchers asked the mothers of these children about their use of selective serotonin reuptake inhibitors (SSRIs) during pregnancy. About 1 in 10 Americans is currently taking an antidepressant, and SSRIs are the most frequently prescribed type of antidepressant; thus, SSRIs are taken daily by a significant portion of the population.

Before I tell you what the results of the study were, let me tell you why we should be somewhat cautious in interpreting them. This study is what is known as a case-control study. In a case-control study, investigators identify a group of individuals with a disorder (the cases) and a group of individuals without the disorder (the controls). Then, the researchers employ some method (e.g. interviews, examination of medical records) to find out if the cases and controls were exposed to some potential risk factor in the past. They compare rates of exposure between the two groups and, if more cases than controls had exposure to the risk factor, it allows the researchers to make an argument for this factor as something that may increase the risk of developing the disease/disorder.
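The statistic usually reported from this kind of comparison is an odds ratio, along with a confidence interval that conveys how precise the estimate is. Here is a minimal Python sketch of the calculation, using made-up counts rather than numbers from any of the studies discussed here:

```python
import math

# Hypothetical 2x2 table (invented counts, not from any study discussed here).
exposed_cases, unexposed_cases = 30, 170        # children with the disorder
exposed_controls, unexposed_controls = 40, 460  # children without the disorder

# Odds ratio: odds of exposure among cases divided by odds of exposure among controls.
odds_ratio = (exposed_cases / unexposed_cases) / (exposed_controls / unexposed_controls)

# Approximate 95% confidence interval on the log-odds scale (Woolf method).
se_log_or = math.sqrt(1 / exposed_cases + 1 / unexposed_cases +
                      1 / exposed_controls + 1 / unexposed_controls)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
# A point estimate above 1 suggests the exposure is more common among cases, but the
# width of the interval--and how close its lower bound sits to 1--tells you how fragile
# the finding is. None of this, of course, establishes causation.
```

An odds ratio whose confidence interval only barely excludes 1 is exactly the kind of marginal result we will encounter below.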

If you take any introductory epidemiology (i.e. the study of disease) course, however, you will learn that a case-control study is fraught with limitations. For, even if you find that a particular exposure is frequently associated with a particular disease, you still have no way of knowing if the exposure is causing the disease or if some other factor is really the culprit. For example, in a study done at the University of Pennsylvania in the late 1990s, researchers found that children who slept with nightlights on had a greater risk of nearsightedness when they got older. This case-control study garnered a lot of public attention as parents began to worry that they might be ruining their kids' eyesight by allowing them to use a nightlight. Subsequent studies, however, found that children inherit alleles for nearsightedness from their parents. Nearsighted parents were coincidentally more likely to use nightlights in their children's rooms (probably because it made it easier for the nearsighted parents to see).

A variable that is not part of the researcher's hypothesis but is associated with both the exposure and the outcome--and can therefore distort the apparent relationship between them--is known as a confounding variable. In the case of the nearsightedness study, the confounding variable was parental genetics. Case-control studies are done after the fact, and thus experimenters have little control over other influences that may have affected the development of disease. As a result, relationships detected in case-control studies are often subject to many confounding influences.
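To see how a confounder can manufacture an association between two things that have no direct link, consider a small simulation loosely inspired by the nightlight example (all probabilities and variable names here are invented for illustration):

```python
import numpy as np

# Hypothetical simulation: parental nearsightedness (the confounder) makes both
# nightlight use and childhood myopia more likely, while nightlight use itself
# has no effect on myopia in this simulation.
rng = np.random.default_rng(42)
n = 100_000
parent_myopic = rng.random(n) < 0.30
nightlight = rng.random(n) < np.where(parent_myopic, 0.60, 0.20)
child_myopic = rng.random(n) < np.where(parent_myopic, 0.50, 0.15)

def risk(group):
    """Proportion of myopic children within a boolean mask."""
    return child_myopic[group].mean()

# Crude comparison: nightlight use looks "risky"...
print(f"crude: {risk(nightlight):.2f} (nightlight) vs {risk(~nightlight):.2f} (no nightlight)")

# ...but stratifying by the confounder makes the apparent effect disappear.
for parent in (True, False):
    stratum = parent_myopic == parent
    print(f"parent myopic = {parent}: "
          f"{risk(stratum & nightlight):.2f} vs {risk(stratum & ~nightlight):.2f}")
```

The crude comparison makes nightlights look risky, but within each level of the confounder the apparent effect vanishes--which mirrors how the nightlight finding was ultimately resolved.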

So, a case-control study can't be used to confirm a cause-and-effect connection between an exposure and a disorder or disease. What it can do is provide leads that scientists can then follow up on using a more rigorous experimental design (like a cohort study or randomized trial). Indeed, the scientific literature is replete with case-control studies that ended up being false leads. Sometimes, however, case-control results have been replicated with better designs, leading to important discoveries. This is exactly what happened with early reports examining smoking and lung cancer.

Back to the recent study conducted by Harrington et al. The authors found that SSRI use during the first trimester was more common among mothers whose children went on to develop ASD than among mothers whose children developed normally. The result was only barely statistically significant, and this fact, combined with the width of the confidence interval, suggests it is not an overly convincing finding--but it was a finding nonetheless. In addition to an increased risk of ASD, the authors also point out that SSRI exposure during the second and third trimesters was higher among mothers of boys with other developmental delays. Again, however, the effect was just barely statistically significant and even less convincing than the result concerning ASD.

So, the study ended up with some significant results that aren't all that impressive. Regardless, because this was a case-control design, there is little we can conclude from the study. To realize why, think about what other factors women who take SSRIs might have in common. Perhaps one of those influences, and not the SSRI use itself, is what led to an increased risk of ASD. For example, it seems plausible that the factors that make a mother more susceptible to a psychiatric disorder might also play a role in making her child more susceptible to a neurodevelopmental disorder. In fact, a cohort study published last year with a much larger sample size found that, when the influence of the condition women were taking SSRIs for was controlled for, there was no significant association between SSRI use during pregnancy and ASD.

The fact that this case-control study doesn't solve the mystery of ASD isn't a knock on the study itself. If anything, it's a knock on some of the science writing done in response to the study. I can't go so far as to say these types of studies shouldn't be reported on. But, they should be reported on responsibly, and by writers who fully understand and acknowledge their shortcomings. For, it is somewhat misleading to the general public (who likely isn't aware of the limitations of a case-control study) when headlines like this appear: "Study: Moms on antidepressants risk having autistic baby boys."

The safety of SSRI use during pregnancy is still very unclear. But both SSRIs and untreated depression during pregnancy have been linked to negative health outcomes for a child. Thus, using SSRIs during pregnancy is something a woman should discuss at length with her doctor to determine if treatment of the underlying condition poses more of a risk than leaving the condition untreated. In making that decision, however, the barely significant findings from a case-control study should not really be taken into consideration.

 

Harrington, R., Lee, L., Crum, R., Zimmerman, A., & Hertz-Picciotto, I. (2014). Prenatal SSRI use and offspring with autism spectrum disorder or developmental delay. Pediatrics. DOI: 10.1542/peds.2013-3406d

Dear CNRS: That mouse study did not "confirm" the neurobiological origin of ADHD in humans

Late last week the French National Centre for Scientific Research (CNRS--the acronym comes from its French name, Centre national de la recherche scientifique) put out a press release describing a study conducted through a collaboration between several of its researchers and scientists from the University of Strasbourg. CNRS is a large (30,000+ employees), government-run research institution in France. It is the largest research organization in Europe and is responsible for about half of the French scientific papers published annually.

The study in question, conducted by Mathis et al., investigated the role of a brain region called the superior colliculus in disorders of attention. The superior colliculus makes up part of the tectum, the roof of the midbrain (which is part of the brainstem). It is strongly connected to the visual system and is thought to play an important role in redirecting attention to important stimuli in the environment. For example, imagine you are sitting in your favorite coffee shop quietly reading a book when someone in a gorilla suit barges in and runs through the middle of the room. You would likely be surprised and would, somewhat reflexively, direct your attention toward the spectacle. This rapid shift in attention would be associated with activity in your superior colliculus.

It has been proposed that individuals who suffer from disorders like attention-deficit hyperactivity disorder (ADHD) may experience increased activity in the superior colliculus, which causes rapid, uncontrolled shifts of attention. Mathis et al. investigated the structure's role in attention using mice with a genetic abnormality that makes the superior colliculus hypersensitive.

The researchers put mice with this defect through a series of behavioral tests. They found that the mice performed normally on tests of visual acuity, movement, and sensory processing. However, the mice seemed less wary than control mice of entering brightly lit areas (something mice usually avoid, as open, well-lit spaces make them vulnerable to attacks from natural predators). Additionally, the mice performed worse on a task that required them to inhibit impulses. These abnormalities in behavior were associated with increased levels of the neurotransmitter norepinephrine in the superior colliculus.

The authors of the study mention that their work supports the hypothesis that superior colliculus overstimulation is a contributing factor in ADHD. I have no qualms with the wording used in the paper, but CNRS's press release about the study is titled "Confirmation of the neurobiological origin of attention-deficit disorder," and it states: "A study, carried out on mice, has just confirmed the neurobiologial origin of attention-deficit disorder (ADD)..."

When it comes to psychiatric disorders without a clearly defined molecular mechanism (which is almost all of them), it is improbable that a finding in mice can confirm anything in humans. Our understanding of ADHD in humans is limited. We have no objective diagnostic criteria; instead, we base diagnosis on observable and self-reported symptoms.

If our understanding of a disorder in humans is based primarily on symptomatology (as opposed to the underlying pathophysiology), then the results of experiments that use animals to model the disorder become more difficult to interpret. For, if we don't know what molecular changes to expect as a correlate of the disease (e.g. senile plaques in Alzheimer's), then we are resigned to trying to match the symptoms of mice with the symptoms of men. In this situation, where we don't know the true pathophysiology, we can never be sure that the symptoms we are seeing in mice and those we are seeing in men have an analogous biological origin.

Thus, when it comes to psychiatric disorders, translating directly from animals to humans is difficult. In the case of ADHD, because the biological origins of the disorder are still mostly unknown, animal models can be used as a means to explore the neurobiology of a similar manifestation of symptoms in the animal. They can't, however, be used to "confirm" anything about the human disorder. In this case, CNRS drastically overstated the importance of the study. Of course, the wording used by CNRS in their initial press release was also found on dozens of other media outlets after they picked up the story.

Do I doubt that ADHD has a neurobiological origin? No. But the study by Mathis et al. did not confirm that it does. CNRS, as an institution of science, should be more careful about the claims they make in their communications with the public.

 

Mathis, C., Savier, E., Bott, J., Clesse, D., Bevins, N., Sage-Ciocca, D., Geiger, K., Gillet, A., Laux-Biehlmann, A., Goumon, Y., Lacaud, A., Lelièvre, V., Kelche, C., Cassel, J., Pfrieger, F., & Reber, M. (2014). Defective response inhibition and collicular noradrenaline enrichment in mice with duplicated retinotopic map in the superior colliculus. Brain Structure and Function. DOI: 10.1007/s00429-014-0745-5

 

 

Popular science writing and accuracy

This week, an article appeared in the L.A. Times online, and was recycled in the Chicago Tribune and a number of other media sources. It focused on a study that was just published in the Journal of Neuroscience. In the study, Iniguez et al. gave fluoxetine (aka Prozac) to male adolescent mice for 15 days. Three weeks after ending the fluoxetine treatment, the researchers tested the mice on two measures that are purported to assess depression in rodents and one that is a suggested measure of anxiety. They found that the mice previously treated with fluoxetine displayed less “depression-like” behavior, but more “anxiety-like” behavior.

The article in the L.A. Times was titled “Prozac during adolescence protects against despair in adulthood.” This title is problematic to me for a couple of reasons. First, because another species isn’t mentioned in the title, it is likely that at first glance most readers will assume the finding occurred in, or is directly relatable to, humans. The author (Melissa Healy) mentions in the second paragraph that the study was done in mice, but continues to draw direct parallels to humans throughout the article. For example, Healy asks, “So how does a medication that treats depression in children and teens -- and that continues to protect them from depression as adults -- also heighten their sensitivity to stress?” when referring to the contradictory findings of increased anxiety and decreased depression-like behavior. The use of this vernacular (e.g. children and teens) makes it sound as if the observation made in the study means the same phenomenon would occur in humans who take Prozac. However, anyone who is familiar with rodent behavior knows that, for every behavior for which we can draw direct parallels to human behavior, there are many others for which the link is much more ambiguous.

Also, it is a bit of a stretch to say that we can measure “despair” in mice. The researchers in this study used two tests to measure depression-like behavior: the response to social defeat stress and the forced-swim test. In the social defeat test, a mouse is forced to interact with another, very aggressive mouse every day for 10 consecutive days. The aggressive mouse will typically force the experimental mouse into a subordinate position. After repeated exposure to this aggression, the subordinated mice become more submissive, antisocial, and withdrawn - behaviors that are thought to resemble human depression. These depression-like symptoms can be reversed with chronic antidepressant treatment. In the forced-swim test, mice are dropped into a beaker of water from which they cannot escape. Eventually, the mice stop trying to escape and move just enough to keep their heads above water; this is interpreted as a form of helplessness, analogous to depression. Numerous experiments have shown, however, that acute treatment with antidepressants causes mice to continue attempting to escape for a longer period of time, which is taken as a sign of decreased depression-like symptoms.

Do these tests measure depression in a way that is relevant to humans? Maybe, maybe not. There are those who would argue that we should be very cautious making such interpretations. Interestingly, one of the coauthors of the Iniguez et al. study, Eric J. Nestler (a prominent name in psychopharmacology research), wrote a paper in 2010 with Steven E. Hyman that focused on using animal models to understand human psychiatric disorders. The paper emphasized that many animal models are inadequate for making assumptions about human psychiatric conditions. Nestler and Hyman specifically mentioned the forced-swim test as one of two that “...are not models of depression at all. Instead, they are rapid, black-box tests developed decades ago to screen compounds for antidepressant activity.” They go on to say that the assumption that the increased activity in a forced swim test is related to alleviating depressive symptoms is an “...enormous anthropomorphic leap…” and that the test “...has not been convincingly related to pathophysiology.”

So, at least one coauthor of the Iniguez et al. study doesn’t seem to be completely confident that these types of findings are directly relatable to humans. Unfortunately, these doubts aren’t expressed within the Iniguez et al. paper. Instead, the authors use terms like “behavioral despair” when describing rodent behavior and fail to discuss the limitations of attempting to relate their findings to a human population.

So, perhaps the author of the L.A. Times article isn’t completely to blame. I could argue that Iniguez et al. also could have been more specific about the implications of their study. However, I focused on the L.A. Times article because I feel like this sort of misinterpretation is rampant in popular science writing. Indeed, I have been guilty of it on this blog. It can often be well-intentioned. At least in my mind, the justification is that increasing interest in the article by the lay public increases interest in science in general, which is the goal of science writing - isn’t it?

Perhaps. It could be considered one goal of popular science writing, although one could argue that discussing science without taking a critical perspective is at odds with the historical tradition of science. Regardless, when speaking about a psychiatric disorder as common as depression, it may be a disservice to spread information that further supports the use of antidepressants. 11% of the American population already take antidepressants, and some studies have suggested that, except in the most severe cases, the difference between taking an antidepressant or placebo is small or negligible. For the superficial reader, the message of this L.A. Times article will clearly be that there are benefits to Prozac that go beyond simply treating depression at the point in time someone takes the drug. The message should, however, be that this study (if replicated) may tell us something about how rodents respond to antidepressants. What that means for humans, until a similar hypothesis is tested in humans, is completely unclear.


Iñiguez, S. D., Alcantara, L. F., Warren, B. L., Riggs, L. M., Parise, E. M., Vialou, V., Wright, K. N., Dayrit, G., Nieto, S. J., Wilkinson, M. B., Lobo, M. K., Neve, R. L., Nestler, E. J., & Bolaños-Guzmán, C. A. (2014). Fluoxetine exposure during adolescence alters responses to aversive stimuli in adulthood. Journal of Neuroscience. DOI: 10.1523/JNEUROSCI.5725-12.2014