Wednesday, July 30, 2008

Dopamine and the Bruce Effect

If you take a recently impregnated female mouse and place her in a cage with an unfamiliar male, something curious often happens. The female, upon smelling the new male's urine, spontaneously aborts the fetus as her body drastically reduces its production of prolactin (PRL), a hormone responsible for progesterone secretion and thus essential to maintaining a pregnancy. The embryo fails to implant and the female begins ovulating again, making her receptive to copulation attempts by the new male. This strange phenomenon was first noticed by biologist Hilda Margaret Bruce in 1959, and is referred to as the Bruce effect.

The Bruce effect has been a curiosity to biologists since its discovery, as many have sought to explain why the female mouse’s body would seemingly be programmed to destroy her own offspring. After all, isn’t reproduction supposed to be the “goal” of evolution, and thus of life?

Several explanations have been offered to make sense of the Bruce effect. One is that it is an adaptive mechanism to protect the female’s potential maternal investment from being lost to infanticide. Infanticide is a fairly common practice among many species, and is usually committed by the male.

A male often cannot visibly determine whether a female is pregnant when he encounters her (at least if fertilization was recent). Thus, upon copulating with her, he takes a risk that she may already be pregnant. If she were to produce offspring from another male, he might mistake them for his own and invest his resources in raising them (whatever “raising” may mean in the particular species). The risk is that, if they are not his offspring, he makes the investment without gaining the benefit of having his genes passed on to a new generation. This is evolutionary suicide, and some biologists believe the males of many species instinctually go to great lengths to avoid it.

One way to make sure none of one’s resources go to raising another’s offspring is to simply get rid of the offspring. Male mice will frequently be infanticidal for the first three weeks after copulating with a female. Then they act paternally for about two months, after which they regress to their infanticidal tendencies. Coincidentally (not really), the mouse gestation period is three weeks and the weaning period is about two months. So the male times his infanticidal behavior perfectly to ensure that any offspring he helps to wean are his (note again that this is instinctual, not conscious behavior).

So, many biologists have suggested the Bruce effect may be a way for the female to avoid going through a pregnancy and investing all of her resources in it, just to have her progeny killed by a new male. Instead, she can abort the fetus and be receptive to him, in the process ensuring that she will have the opportunity to raise offspring into adulthood.

An alternative explanation for the Bruce effect involves mate selection. In this hypothesis, blocking the pregnancy is beneficial to the female by providing her with a novel mating partner. In highly territorial animals like rodents, a female may be more inclined to mate with the mouse whose urine she can currently smell, as he is most likely dominant in that territory.

Whatever the reason for the effect, a female also seems to reach a point when she has too much invested in the pregnancy already for it to be beneficial to abort. In mice, this occurs after the first few days of pregnancy, when the embryo becomes implanted. After this point, the Bruce effect no longer occurs. This is thought to involve a type of evolutionary weighing of the pros and cons. After three days of pregnancy, the female “decides” she has put enough time into her fetus that it would be counter-productive to start over. She must take the risk.

While the evolutionary cause of the Bruce effect may not be known for some time, a study published in July's Nature Neuroscience brings us closer to understanding the neural mechanism behind it. It seems to be dependent upon the versatile neurotransmitter dopamine.

When the female mouse smells another male’s urine, two sense organs in the nasal cavity are involved in processing the scent. One, called the vomeronasal organ (VNO), detects pheromones. The other, the main olfactory epithelium (MOE), detects odorants. The VNO projects fibers to the accessory olfactory bulb (AOB), while the MOE projects to the main olfactory bulb (MOB). The MOB contains a large population of dopaminergic interneurons, known as the juxtaglomerular dopaminergic interneurons (JGD).

As these dopamine interneurons are highly involved with olfaction, the scientists involved in the study wondered if they might also play a role in blocking pregnancy through urine odor detection. When they measured dopamine levels in the female mouse brain, they found a surge in dopamine occurred after the third day of the pregnancy—the time at which male odor no longer has an abortive effect on the fetus.

When they administered a dopamine antagonist, which blocks dopamine transmission, spontaneous abortion again occurred, even after implantation on the third day. Therefore, dopamine appears to interfere with the perception of the male urine odors, and is responsible for the suppression of the Bruce effect after the third day of a mouse pregnancy.

These findings represent a new understanding of the roles of the olfactory bulb, implicating it in the control of reproduction and social behavior in rodents. While not really applicable to humans, making sense of the Bruce effect is important in comprehending social behavior that, without knowledge of evolutionary theory, seems otherwise inexplicable.

Reference:

Serguera, C., Triaca, V., Kelly-Barrett, J., Al Banchaabouchi, M., & Minichiello, L. (2008). Increased dopamine after mating impairs olfaction and prevents odor interference with pregnancy. Nature Neuroscience, 11(8), 949-956. DOI: 10.1038/nn.2154

Saturday, July 26, 2008

Gene Therapy for Prion Diseases

Prion diseases are relatively rare in humans. The most common, Creutzfeldt-Jakob disease (CJD), afflicts only about one in every million people each year. Despite their low prevalence, however, these diseases (also known as transmissible spongiform encephalopathies, or TSEs) receive a fair amount of attention from the media and the scientific community. This interest is probably due to their enigmatic mechanism, potential for epidemic spreading, frightening neurodegenerative features, and (as of yet) incurability.

TSEs are neurodegenerative diseases thought to be the result of a prion infection. This distinguishes them from most other sicknesses, which are caused by microbial infections. Prions are infectious agents made up entirely of proteins (the word itself comes from a combination of proteinaceous and infectious).

A prion protein called PrPC (the C stands for cellular, and actually should be in superscript but I can't get Blogger to do this) is commonly present on the membranes of our cells, although its function has not yet been fully resolved. PrPSc (the Sc is for Scrapie, the first identified prion disease—in sheep) is an isoform of PrPC and the toxic form of PrP. When it enters the brain it can cause conformational changes in PrPC, turning it into PrPSc.

PrPSc is extremely resistant to being broken down. Thus, it accumulates in the brain, forming protein aggregates known as amyloid fibers. These are toxic to neurons, and eventually kill them. Astrocytes, which perform a number of supporting functions in the brain (one of which is cleaning up debris), find the dead neurons and digest them.

This creates actual holes in the brain, giving it a sponge-like appearance (hence the term spongiform). The continued neurodegeneration leads to a number of clinical symptoms, such as changes in personality, depression, involuntary movements, lack of coordination, dementia, and eventually the complete loss of the ability to move or speak. TSEs are currently incurable, and no effective method of therapeutic treatment has been found. The aggregation of PrPSc occurs over a long period of time, giving the diseases incubation periods that range from 10 to 60 years, depending on the disease type.

TSEs can be the result of genetic or sporadic (non-genetic) causes. A mutation in the prion protein gene (PRNP) can make PrPC much more likely to misfold into PrPSc, leading to a prion disease. TSEs are also contagious—not through the air or normal contact, but through exposure to infected tissue, body fluids, or contaminated medical instruments (due to the durability of prions, they can survive normal sterilization procedures).

Unfortunately, we have learned how TSEs are spread by witnessing several deadly epidemics. Around the middle of the twentieth century, a TSE arose in a New Guinean tribal people called the Fore. It is thought to have spread through cannibalistic ritual practices, and it killed over 1,000 of their people. In the 1980s, 60 deaths were linked to the transmission of CJD through contaminated medical instruments. Around the same time, 85 people died after receiving prion-infected growth hormone injections. In the 1990s, a type of CJD called variant CJD (vCJD) was linked to eating beef infected with the bovine form of TSE, bovine spongiform encephalopathy (BSE), or mad cow disease. vCJD has a shorter incubation period than CJD, with a median age at death of 28, versus 68 for CJD. The illness also has a longer duration, with a median of 15 months for vCJD and only 4-5 months for CJD. Up to 200 people worldwide have died from vCJD.

BSE is thought to be caused by feeding cattle the remains of other infected cattle. This practice was stopped in 1989. Due to the long incubation period of the disease, however, some fear that the real mad cow disease epidemic has yet to manifest itself.

An article in PLoS ONE this month addresses a possible way to control such an outbreak, with the successful application of a gene therapy treatment for TSEs. A natural resistance to prion diseases has been discovered in both animals and humans, and specific mutant forms of the mouse Prnp gene have been found to reduce the replication of prions in infected cells.

The researchers involved in the study injected this mutant gene into the brains of mice infected with prions. In order to make the study more relevant to human TSEs, they did this during late stages of the disease, at 80 and 95 days post infection. This increases relevance because, due to the long incubation period of TSEs, most people are unaware they have contracted them until serious symptoms develop.

They found that, after two injections, treated mice survived 20% longer than non-treated mice. They exhibited substantial improvements in behavioral symptoms, as well as a significant reduction of spongiosis and astrocytic activity in the brain.

The authors suggest this effect occurred because the mutated Prnp gene produces a protein that cannot be converted into PrPSc. Additionally, the protein it produces competes with PrPC for binding to PrPSc, slowing the conversion of existing PrPC to the toxic form. Basically, the PrPSc doesn’t realize the new proteins can’t be transformed, and still attaches itself to them. This delays the overall disease progression, as many PrPSc molecules are kept busy trying to induce conformational changes to no avail.

These results are promising not only because the treatment slows the aggregation of toxic prions, but because the effect was demonstrated at such a late stage of disease. Unfortunately, the disease was slowed, not cured. Regardless, the hint of a successful method of treatment for prion diseases might be comforting to nervous meat eaters who fear a future vCJD outbreak. I’m a vegetarian (and have been for a long time), so as long as the soybeans in my tofu weren’t grown with meat and bone meal fertilizer, I feel reasonably safe.

Reference:

Toupet, K., Compan, V., Crozet, C., Mourton-Gilles, C., Mestre-Francés, N., Ibos, F., Corbeau, P., Verdier, J.-M., Perrier, V., & Lewin, A. (2008). Effective gene therapy in a mouse model of prion diseases. PLoS ONE, 3(7), e2773. DOI: 10.1371/journal.pone.0002773

Tuesday, July 22, 2008

The Singing Bass: Kitschy or Insightful?

If you were listening in on a discussion about the evolutionary origins of language, you might expect to hear theories bandied about concerning evidence for language-like processes in apes. You probably wouldn’t be too shocked to hear someone bring up an example of language in parrots. You might, however, be a little surprised if the conversation turned to the origins of human vocalization in toadfish.

Perhaps this isn’t that surprising, though, when one considers how much of our evolutionary beginnings are shared with fishes. While (of course) fish don’t have language in a human sense, some species do have the ability to make vocalizations in certain situations, like courtship or defense of territory. Although they lack an air tube leading to the mouth, and a larynx to create the vibrational variations more common to land animal utterances, some are able to make noises with an air sac used primarily for buoyancy control and secondary respiration, known as the gas bladder. Fish of the batrachoidid family in particular (i.e. the midshipman and toadfish) have a diverse group of vocalizations. They vary depending on the context, with specific calls for aggression, surprise, or mating (among others).

This leads to a couple of different hypotheses. One is that the ability to vocalize evolved independently a number of times throughout history: in fish, amphibians, reptiles, mammals, and birds. Another is that there is a common origin for the ability to vocalize that can be traced back millions of years to a piscine ancestor. A study published in this week’s Science explores the latter hypothesis by investigating the development of the neural circuitry for vocalization in larval fish.

Studying embryos or larvae is a common method in evolutionary developmental biology. Similarities in the embryonic development of two organisms are considered evidence of a common ancestor. This conclusion is based on the fact that evolution works by the alteration of existing structures. Thus, two related organisms will theoretically have similar embryonic development up to a certain point, where it will then diverge in order to form the structures that make the two creatures taxonomically different. A commonly given example of this is the pharyngeal pouches (often loosely called gill slits) that human embryos possess early in development.

The authors of the study in Science found that the vocal motor neurons in batrachoidid fish develop in a segmental region that spans the caudal hindbrain and rostral spinal cord. This is similar to the pattern of development found in other vertebrates like frogs and birds. Adult phenotypes seem to indicate a comparable developmental process in reptiles and mammals as well, although embryological studies here are lacking.

The authors conclude that these similarities in the distribution of vocal neurons indicate a conserved developmental pathway that involves Hox gene expression. They suggest this pathway predates the radiation of fish, originating over 400 million years ago. Thus, perhaps the Big Mouth Billy Bass is a more astutely-developed toy than it first appears to be…no, it’s still stupid.

Reference:

Bass, A.H., Gilland, E.H., Baker, R. (2008). Evolutionary Origins for Social Vocalization in a Vertebrate Hindbrain-Spinal Compartment. Science, 321(5887), 417-421. DOI: 10.1126/science.1157632

Friday, July 18, 2008

Mirror Neurons May Be Responsible For Global Warming & U.S. Economic Woes

Since their discovery in the 1990s, mirror neurons have experienced a degree of fanfare uncommon to findings in the field of neuroscience. Mirror neurons are so named because they are activated both when a primate participates in a task, and while watching another complete the same task, thus “mirroring” the behavior of the other animal. This unique activation pattern has led some to suggest that mirror neurons are integral not only to imitation, but also to understanding that others have their own mental states (theory of mind). By extension, it has been hypothesized that mirror neurons are necessary for language acquisition and social interaction. Dysfunctions in mirror neurons have even been offered as a possible cause of autism.

Thus, they have come to be viewed as a very special kind of neuron, with a versatility and importance to brain function that is unrivaled by other types of brain cells. But, do mirror neurons deserve the exalted status that some have ascribed to them? In short: probably not.

Mirror neurons do seem to play an interesting role in cognition. Primate studies have found mirror neurons to be activated in correlation with focusing on a particular goal or intention of movement. They are also activated in a selective fashion, with specific groups corresponding to different goals of an action, e.g. grasping to move vs. grasping to eat. Of additional interest, they have been found to respond to sounds associated with an observed action.

fMRI studies in humans have revealed specific activity in areas where mirror neurons are thought to be located, such as the ventral premotor cortex (vPM) and anterior intraparietal sulcus (aIPS), during observation and imitation of movement.

But, while these findings in humans and non-human primates are intriguing, they don’t support the rampant speculation that has followed about the role of mirror neurons in overall cognitive function. Experiments with monkeys to date haven’t assessed the ability to imitate, experience empathy, display theory of mind, or use language. Of course, it is debatable to what extent some of these attributes even exist in non-human primates, or if they can be studied if they do.

As for humans, neuroimaging experiments have allowed scientists to determine which regions of the brain are active during imitation or observation of an action. The specific neurons that are utilized, however, and any physiological characteristics that make them unique, cannot be assessed with current imaging technology.

Thus, the roles attributed to mirror neurons in the last decade since their discovery may have been an overly ambitious attempt to describe their function. By extension, implying that their malfunction is critical in autism could really be jumping the gun.

In an essay in last week’s Nature, Antonio Damasio and Kaspar Meyer discuss the exaggerated claims about mirror neurons, and suggest a rational hypothesis for how they may work. Twenty years ago Damasio proposed a theory known as “time-locked multimodal activation” to explain the development of complex memories. The theory is based on the proposed existence of groups of neurons that, during the encoding of memories, receive input from a number of different sites. Damasio termed these neuronal groups convergence-divergence zones (CDZ). He suggested there are two types of CDZs: local CDZs, which collect information from areas close to a sensory cortex (e.g. the visual cortex), and non-local CDZs, which are higher-order structures of the brain where the information from local CDZs converges.

According to this theory, when a memory is formed—Damasio and Meyer use the example of a monkey opening a peanut shell—all the information about the event converges on a non-local CDZ. Then, if the monkey hears a peanut shell opened in the future, this would activate a local auditory CDZ, as well as the non-local CDZ where memories associated with the noise are stored. Signals are sent out from the non-local CDZ to all local CDZs that were involved in the original experience of the event, activating these sites and resulting in a sort of recreation of the original peanut-cracking.
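To make the idea a bit more concrete, here is a toy sketch (in Python) of how a non-local CDZ could bind together local zones during encoding and then re-activate them from a single cue. The class names and structure are my own invention for illustration only; the actual proposal is about populations of neurons, not objects in code.

```python
# Toy illustration of the convergence-divergence zone (CDZ) idea.
# All class and zone names are invented for illustration; the real proposal
# concerns populations of neurons, not Python objects.

class LocalCDZ:
    def __init__(self, modality):
        self.modality = modality
        self.active = False

    def activate(self):
        self.active = True
        print(f"local {self.modality} CDZ reactivated")

class NonLocalCDZ:
    """Higher-order zone that links the local zones co-activated during an event."""
    def __init__(self):
        self.linked = []

    def encode(self, local_zones):
        # During the original experience, converging inputs are bound together.
        self.linked = list(local_zones)

    def retro_activate(self):
        # A later cue that reaches this zone re-activates every linked local zone,
        # recreating (in outline) the original multimodal experience.
        for zone in self.linked:
            zone.activate()

# Encoding: the monkey sees, hears, and feels a peanut shell being cracked.
sight, sound, touch = LocalCDZ("visual"), LocalCDZ("auditory"), LocalCDZ("somatosensory")
memory = NonLocalCDZ()
memory.encode([sight, sound, touch])

# Retrieval: the cracking sound alone drives the non-local zone, which in turn
# re-activates the visual and somatosensory fragments as well.
memory.retro_activate()
```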

In this framework, mirror neurons correspond to the non-local CDZs. They are not physiologically unique, however; they are normal neurons embedded in a network that has less to do with “mirroring” than with integrating and syncing the various aspects of elaborate memories. This does not take away from the role and function of this network, but it should detract a little from the aggrandized status attributed to individual mirror neurons, in favor of an appreciation of the holistic complexity of the brain.

The CDZ hypothesis, however, has not been tested, although research does indicate that networks involved in observing and imitating behavior spread beyond purported mirror neuron sites. Regardless of whether the specifics of the CDZ hypothesis come to be supported by future studies, I feel it represents a more sensible approach to mirror neurons. To credit mirror neurons alone with a function that carries such importance, like the ability to infer the mental states of others, seems to oppose much of what has been learned thus far about neuroscience. We have never found language neurons, love neurons, or fear neurons. Instead we have found networks that spread throughout brain regions that correlate with the ability to experience these aspects of cognition. I suspect we will soon say the same about mirror neuron networks and their involvement in social interaction.

References:

Damasio, A., Meyer, K. (2008). Behind the looking-glass. Nature, 454(7201), 167-168. DOI: 10.1038/454167a

Dinstein, I., Thomas, C., Behrmann, M., & Heeger, D.J. (2008). A mirror up to nature. Current Biology, 18 (1), 13-17.

Monday, July 14, 2008

If I Beat Up a Robot, Will I Feel Remorse?

At times, when my computer's performance has transformed it from an essential tool into a source of frustration, I will find myself getting increasingly angry at it. Eventually I may begin cursing it, roughly shoving the keyboard around, violently pressing the reset button, etc. And I can’t help noticing that, during these moments of anger, I have actually begun to blame my computer for the way it is working—as if there were a homunculus inside the machine who had decided that it was a good time to frustrate me and then started fiddling with the wires and circuitry.

I’m sure I’m not alone. Human beings have a general tendency to attribute mental states, or intentionality, to inanimate objects. This attribution is known as mentalizing, and there are several possible reasons why it is a general human strategy that we overuse and mistakenly apply to nonliving things. One is that our knowledge of human behavior is more richly developed than other types of knowledge, due to the early age at which we acquire it and the large role it plays in our lives. Thus, perhaps we are predisposed to turn to this knowledge to interpret actions of any kind, sometimes causing us to anthropomorphize when examining non-human actions.

Another reason may be that assigning intentionality to an action in our environment is the safest and quickest way to interpret it. For example, if one is walking in tall grass and the grass a few feet ahead begins rustling, it is more adaptive to suspect a predator behind that movement than to assume it is just the wind. Someone who decides it is the wind may end up being wrong, and getting killed or injured. Someone who assigns intention to it may also be wrong, but has a better chance of staying safe, because the erroneous conclusion probably resulted in a defensive or evasive, rather than a nonchalant, strategy.
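The asymmetry in costs can be made concrete with a quick expected-cost comparison. The probabilities and costs below are made up purely for illustration, but they show why over-attributing agency can be the cheaper error on average.

```python
# Back-of-the-envelope expected-cost comparison for the rustling-grass example.
# All probabilities and costs are invented for illustration only.

p_predator = 0.05               # assumed chance the rustling really is a predator
cost_ignored_predator = 100.0   # cost of assuming "wind" when it was a predator
cost_false_alarm = 1.0          # cost of taking evasive action over mere wind

# Expected cost of each strategy:
assume_wind = p_predator * cost_ignored_predator      # risk the rare but huge loss
assume_agent = (1 - p_predator) * cost_false_alarm    # pay many small losses instead

print(f"assume wind:  expected cost = {assume_wind:.2f}")
print(f"assume agent: expected cost = {assume_agent:.2f}")
# Even with only a 5% predator probability, over-attributing agency is the cheaper error.
```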

One more possible reason for our overuse of Theory of Mind (the understanding that others have their own mental states) may be based on our need for social interaction. Studies have indicated that people who feel socially isolated tend to anthropomorphize to a greater extent. Thus, perhaps part of the reason we assign intentionality so readily is that we have a desire for other intentional agents to be present in our environment, so we can interact with them.

A study published this month in PLoS ONE further explores our behavioral and neural responses when we interact with humans and machines that vary in their resemblance to humans. Twenty subjects participated in the study, playing a game called the prisoner’s dilemma (PD), which has been used extensively to study social interaction, competition, and cooperation.

PD is so called because it is based on a hypothetical scenario in which two men are arrested for involvement in the same crime. The police approach each prisoner separately and offer him a deal that requires betraying the other. Each prisoner must decide whether to remain silent or to betray his partner. If both stay silent, they face only a minor sentence, because the police don’t have enough evidence to make the greater charge stick. If both betray, they each face a lengthy sentence. If one betrays and the other remains silent, the betrayer goes free and the silent accomplice receives the full ten-year sentence, the worst outcome of all. The game is usually modified into repeated rounds of cooperation or betrayal, in which the players can base their decisions on their opponent’s actions in previous rounds.
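For the curious, the payoff structure can be written out explicitly. The sentence lengths below are placeholder values chosen only to preserve the ordering described above; they are not taken from the study itself.

```python
# One round of the prisoner's dilemma as described above.
# Sentences (in years) are placeholder values; only their ordering matters.

SENTENCES = {
    ("silent", "silent"): (1, 1),    # both cooperate: minor sentence each
    ("betray", "betray"): (5, 5),    # mutual betrayal: lengthy sentence each
    ("betray", "silent"): (0, 10),   # betrayer goes free, silent partner gets the full term
    ("silent", "betray"): (10, 0),
}

def play_round(choice_a, choice_b):
    """Return the (prisoner A, prisoner B) sentences for one round."""
    return SENTENCES[(choice_a, choice_b)]

print(play_round("silent", "betray"))   # (10, 0): the classic temptation to defect
```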

In the PLoS ONE study, participants played PD against a computer partner (CP) (just a commercial laptop set up across the room from them), a functional robot (FR) consisting of two button-pressing mechanisms with no human form, an anthropomorphic robot (AR) with a human-like shape, hands, and face, and a human partner (HP). Unbeknownst to the participants, the form of their opponent had no relationship to the responses given; all were random. The participants’ impressions of their partners were gauged after the experiment with a questionnaire. The survey measured how much fun the participants reported having when playing against each partner, as well as how intelligent and competitive they felt each player to be. Participants indicated that they enjoyed the interactions more the more human-like their partner was. They also rated the partners as progressively more intelligent from the least human (CP) up to the most human (HP). They judged the AR to be more competitive than its less-human counterparts, despite the fact that its responses were randomly generated, just as the others’ were.

The brain activity of the participants during their interactions was also measured using fMRI. Previous research has indicated that mentalizing involves at least two brain regions: the right posterior superior temporal sulcus (pSTS) at the temporo-parietal junction (TPJ), and the medial prefrontal cortex (mPFC). In the present study, these regions were activated during every interaction, but activity increased linearly as the partners became more human-like.

These results indicate that the more a machine resembles a human, the more we may treat it as if it has its own mental state. This doesn’t seem to be surprising, but I guess what intrigued me more about the study was that there was activity in the mentalizing areas of the brain even during the interaction with the CP, as compared to controls. The activity also increased significantly with each new partner, even when the increase in human likeness was minimal. These examples are evidence of our proclivity to mentalize, as even a slight indication of responsiveness by an object in our environment makes us more inclined to treat it as a conscious entity.

The authors of the study point out that these results may be even more significant when robots become a larger part of our lives. If the frustration I experience with my computer is any indication, I foresee human on robot violence being an epidemic by the year 2050.

Reference:

Krach, S., Hegel, F., Wrede, B., Sagerer, G., Binkofski, F., Kircher, T., Robertson, E. (2008). Can Machines Think? Interaction and Perspective Taking with Robots Investigated via fMRI. PLoS ONE, 3(7), e2597. DOI: 10.1371/journal.pone.0002597

Friday, July 11, 2008

Computational Neuroscience and Systems Biology

In 1952, Alan Hodgkin and Andrew Huxley published a paper that was the result of several years of experimentation on the squid giant axon. They had been measuring action potentials, a task made easier in the squid because of the large diameter of its giant axons (up to 1 mm, compared to roughly 1 micrometer, or a millionth of a meter, for typical axons in humans). Using a device known as a voltage clamp, which allowed them to hold the voltage of the axon membrane at set values and measure the resulting current flowing through its ion channels, they developed a mathematical model that could be used to calculate current flow across excitable membranes. They won the Nobel Prize in 1963 for this work, and amazingly their equations are still used today in essentially their original form.
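The heart of their model is a single membrane equation: the capacitive current equals the injected current minus the sodium, potassium, and leak currents, each scaled by voltage-dependent gating variables (m, h, and n). The sketch below integrates that equation with a simple Euler loop, using the standard textbook parameters for the squid giant axon; it is an illustrative re-implementation, not the authors’ original calculations (which predate digital computers anyway).

```python
# Minimal Hodgkin-Huxley membrane model, integrated with forward Euler.
# Parameter values are the standard textbook ones for the squid giant axon;
# this is an illustrative sketch only.
import math

# Maximal conductances (mS/cm^2), reversal potentials (mV), capacitance (uF/cm^2)
g_Na, g_K, g_L = 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.4
C_m = 1.0

# Voltage-dependent rate functions for the gating variables
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * math.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * math.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * math.exp(-(V + 65.0) / 80.0)

# Initial conditions near rest, time step (ms), and injected current (uA/cm^2)
V, m, h, n = -65.0, 0.05, 0.6, 0.32
dt, I_ext = 0.01, 10.0

for step in range(int(50.0 / dt)):          # simulate 50 ms
    I_Na = g_Na * m**3 * h * (V - E_Na)     # sodium current
    I_K  = g_K  * n**4     * (V - E_K)      # potassium current
    I_L  = g_L             * (V - E_L)      # leak current
    dV = (I_ext - I_Na - I_K - I_L) / C_m
    # Gating variables relax toward their voltage-dependent steady states
    m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
    h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
    n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
    V += dt * dV
    if step % 500 == 0:                     # print every 5 ms
        print(f"t = {step * dt:5.1f} ms, V = {V:7.2f} mV")
```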

This mathematical modeling of neuronal function might be considered the first historical step in the creation of a field that is known today as computational neuroscience. Computational neuroscience involves the translation of brain function into quantifiable models. This usually necessitates drawing from a number of different fields, such as neuroscience, cognitive psychology, electrophysiology, mathematics, and computer programming.

A recent article in PLoS Computational Biology summarizes the history of computational neuroscience and examines the interaction of the field with another: systems biology. Systems biology is an approach to studying biology that emphasizes looking at a biological system as a whole. This is in contrast to a reductionist methodology, which involves breaking something down into its constituent parts in order to understand how it functions.

Biology has had to rely on reductionism for much of its history, simply because there has not been enough information to understand whole systems. Now, however, sub-fields like genomics and proteomics have led to drastic gains in the extensiveness of our knowledge of biological processes, allowing complex computational modeling of biological systems to occur for the first time.

As pointed out in the PLoS article, however, these two fields that use computational methods to explore neuroscience and biology, respectively, are distinctly separate from one another, and have little interaction or overlap. Why is this?

One reason is that the information available to systems biology is much more comprehensive. Data sets like an entire genome are accessible for use in computational modeling. Neuroscience, on the other hand, usually has to take a more theoretical approach. For example, computational neuroscientists do a lot of work with neural network models. These models, however, are usually general examples and don’t attempt to mimic specific networks in the brain; at this point, accurate modeling of distinct networks is too ambitious an endeavor. The disparity in the information available to the two fields has led to differences in methods and tools (e.g. the software used for modeling), which make integration of the areas even more difficult.
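As an example of what such a general model looks like, here is a minimal firing-rate network: a handful of abstract units with random connection weights, relaxing toward a steady state under external drive. Everything about it (the weights, the number of units, the input) is arbitrary, which is exactly the point made above about how these models differ from the data-rich models of systems biology.

```python
# A generic firing-rate network of the kind described above: units are abstract
# populations, and the random weights are purely illustrative rather than a
# model of any specific circuit in the brain.
import numpy as np

rng = np.random.default_rng(0)
n_units = 8
W = rng.normal(0.0, 0.4, size=(n_units, n_units))   # random recurrent weights
np.fill_diagonal(W, 0.0)                             # no self-connections
tau, dt = 10.0, 1.0                                  # time constant and step (ms)

r = np.zeros(n_units)                                # firing rates
I_ext = np.zeros(n_units)
I_ext[0] = 1.0                                       # drive one unit externally

for t in range(200):
    # Standard rate dynamics: tau * dr/dt = -r + f(W r + input)
    dr = (-r + np.tanh(W @ r + I_ext)) / tau
    r = r + dt * dr

print(np.round(r, 3))   # steady-state rates after 200 ms of simulated input
```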

It seems, however, that this chasm between computational neuroscience and systems biology will eventually be closed. For now, the separation may be unavoidable, as knowledge of neuroscience lags behind that of other biological areas for reasons that range from the complexity of the brain to the history of our philosophical approach to studying it. But the understanding of biological processes like gene expression and protein synthesis that makes systems biology capable of large-scale modeling attempts will eventually lead to an improved elucidation of how the brain works. This will inevitably allow for the integration of computational neuroscience and systems biology. After all, the brain is a pretty important part of the overall system.

Reference:

De Schutter, E., Friston, K.J. (2008). Why Are Computational Neuroscience and Systems Biology So Separate?. PLoS Computational Biology, 4(5), e1000078. DOI: 10.1371/journal.pcbi.1000078

Thursday, July 10, 2008

Foreign Accent Syndrome

Watching someone you know recover from a stroke or other serious brain insult can be extremely difficult. Cognitive deficits (including dementia), apraxia, and speech problems are among the disabilities that these patients may have to endure. At times, these impairments can make it hard to find the individual you knew before the incident in the post-accident patient. This is perhaps the most trying aspect of the experience.

Well, imagine if, when you attempt to speak to a stroke survivor you knew before the stroke, you find not that they have difficulty producing speech, but that they have strangely adopted a new British (or German, Dutch, etc.) accent. While it may seem to be the lesser of two evils, you can certainly envision that it might also be a disconcerting experience (for all parties involved).

This rare (but real) disorder is known as foreign accent syndrome. It occurs after a severe brain injury or stroke. The patient develops an abnormality of speech that seems, to most listeners, to resemble a foreign accent. A recent case, one of the first in Canada, involved a woman who had a stroke, then adopted an accent that sounded like Maritime Canadian English—a dialect the woman was previously unfamiliar with.

What exactly is going on here? The syndrome was at first an enigma, but recent investigations have begun to shed some light on the mechanisms underlying it. According to a review article in the Journal of Neurolinguistics, “foreign accent syndrome” is actually something of a misnomer, as patients do not demonstrate a speech pattern that consistently corresponds to a particular foreign accent. Instead, they display general changes in linguistic prosody that listeners mistakenly attribute to a different dialect.

Prosody is the rhythm, stress, and intonation of speech. Its disruption affects overall speaking ability, and is particularly problematic for vowel production, pitch, and syllable stress. According to the review, phoneticians who have listened to foreign accent syndrome patients have asserted that their speech doesn’t consistently resemble any one foreign dialect. Instead, it fluctuates in its similarity to various languages, and even to different families of languages. Thus, the foreign accent syndrome tag may be a simplification.

It is not a surprise to learn that most cases of foreign accent syndrome appear to be associated with lesions to the left hemisphere of the brain, which is typically dominant for language. Patients usually have damage to Broca’s area, the adjacent inferior portion of the motor strip, and/or the middle frontal gyrus. Details beyond these general areas are scarce, however, leaving the specific neural basis of the syndrome largely unknown.

Probably the important take-home message at this point is that the syndrome doesn’t involve the mysterious acquisition of a foreign accent. Instead, it is a general affliction of speech that causes distortions in prosody, which are interpreted as foreign dialects by listeners. All in all, it is perhaps one of the less debilitating effects of brain injury/stroke. Regardless, one can imagine the upset it must cause at an already difficult time. Perhaps some of that distress will be assuaged in new patients by an improved understanding of the syndrome.

Reference:

Blumstein, S., & Kurowski, K. (2006). The foreign accent syndrome: A perspective. Journal of Neurolinguistics, 19(5), 346-355. DOI: 10.1016/j.jneuroling.2006.03.003

Monday, July 7, 2008

Encephalon #49 Celebrates Independence! (from Lamarckism)

The first week in July is a time of great significance—one that reminds us of change, revolution, and how an age-old view of the world can be drastically altered by the persistent belief in one's own novel ideas. This, of course, is because July 1st marks the anniversary of the presentation of Charles Darwin and Alfred Russel Wallace's independently developed theories of evolution and natural selection to the Linnean Society of London. The reading represented the first public explanation of the theory, which would eventually come to be the foundation upon which the world of science rests. Last week's anniversary is a special one, marking the 150th year since that reading. So, as you peruse Encephalon's round-up of the latest and greatest neuroscience blogging, remember that all roads in science today lead back to July 1st, 1858, at the Linnean Society.

Darwin developed his theory without the luxury of knowing anything about genes. Brain Stimulant skeptically discusses the use of gene therapy in psychiatry. Just the ability to have the conversation, however, is representative of how far our understanding of genetics has come in 150 years.

Mind Hacks questions whether our rapidly improved methods of imaging the brain are resulting in sensationalized reporting on experimental results.

Cognitive Daily describes what makes voices more or less attractive.

Neurophilosophy provides an intriguing look at the brain's ability to reorganize itself after a cerebrovascular accident. It appears that cells in the brain are much more versatile than once thought.

Neuroanthropology looks at the relationship between music and movement, and how it is manifested in the drumming that accompanies Sundanese martial arts demonstrations, as well as gives us a rational perspective on the debate over the biological origins of homosexuality, and a good reason to relax.

Sharp Brains discusses how physical exercise and mental exercise compare in improving the health of your brain. An interview with Dr. Art Kramer from the University of Illinois provides a way to engage in both forms of exercise at the same time: walking book clubs! And, tests to determine "brain age" are called into question.

The Winding Path provides an in-depth description of the co-evolution of the brain and culture, and then focuses specifically on the evolution of complex social interaction.

Brain Blogger asks if religion has neural or supernatural roots, and looks at the ramifications of implanting microchips containing medical information in patients.

Finally, The Neurocritic takes Science to task for not practicing what it preaches about fMRI translation. He also summarizes a unique review article that uses Dilbert cartoons to help explain the neural correlates of prediction error signals.

Be sure to check out the next edition of Encephalon at Sharp Brains on July 21st. Send your submissions to encephalon{dot}host{at}gmail{dot}com.

Thursday, July 3, 2008

Bisexuality in Drosophila

The fruit fly, like many organisms, has a stereotypical courtship ritual that precedes mating. After noticing a female, a male fly will follow her with a persistence that is strangely reminiscent to me of behavior that can be observed in any local pub on a busy night. The male will then tap the female with his foreleg, which allows him to sense her pheromones through chemoreceptors on his leg, and verify whether she is sexually receptive. If so, he will extend one wing and vibrate it, producing a species-specific courtship song. He also licks her genitalia to further test her pheromones. Of course these last few steps aren’t as noticeable at the local bar, and if they are you may be in the wrong place (perhaps a strange fetish pub). If she doesn’t reject him, he mounts her and attempts to copulate.

A fruit fly’s ability to discriminate between males and females is based on visual, auditory, and chemical cues, such as the pheromones 7-tricosene and cis-vaccenyl acetate (cVA). Flies that don’t produce these pheromones are deemed female and courted by other males. Mutant flies that cannot sense the pheromones attempt to copulate indiscriminately with males and females. Normally, however, homosexual behavior in drosophila is relatively rare.

Earlier this year, a joint research team from France and America set out to determine what the biological difference between bisexual and heterosexual flies is. Is it that bisexual flies have difficulty sensing pheromones like 7-tricosene and cVA, or that they sense the pheromones but interpret them differently? What is the mechanism that causes the difference in attraction?

The group identified a mutation in drosophila that drastically increased homosexual encounters. They named it genderblind (gb) due to the resulting phenotype, which exhibited bisexual behavior. Using an immunoblot, they determined that the gb mutation causes a reduction in the quantity of gb protein. An immunoblot, also known as a western blot, involves separating proteins with gel electrophoresis and then probing for specific proteins with antibodies that have been raised against them (if the protein is present, the antibodies bind to it and can be detected).

In order to determine if homosexual behavior in flies was simply a result of the misinterpretation of sensory cues, the group manipulated visual and chemosensory cues and measured fly response. They found that, although reducing the availability of visual cues affects the ability of the fly to discriminate between sexes, it was not enough of an effect to explain gb behavior. When they exposed the gb flies to mutant males that did not produce 7-tricosene and cVA, homosexual behavior was reduced to wild-type levels. When they applied these pheromones topically to the mutants, however, homosexual behavior from the gb flies was restored. This suggested that gb flies sense the pheromones, but interpret them differently than wild-type flies.

The group was able to identify the genderblind protein as a glial amino-acid transporter subunit and a regulator of glutamate in the central nervous system (CNS) of the fly. One function of glutamate is to reduce the strength of glutamatergic synapses through desensitization. The gb mutants had reduced genderblind protein levels and lower levels of extracellular glutamate. This resulted in increased glutamatergic synapse strength in the CNS. A glutamate antagonist administered to gb flies caused them to revert back to wild-type sexual behavior, indicating that the stimulation of glutamatergic circuits is responsible for the homosexual behavior. Additionally, inducing the overexpression of glutamate in the CNS of the fly caused an increase in homosexual behavior in both gb and wild-type flies.
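The logic of the finding can be caricatured with a toy threshold model: less extracellular glutamate means less receptor desensitization, stronger glutamatergic synapses, and, past some threshold, male-male courtship. The function and numbers below are entirely made up for illustration; they are not the authors’ quantitative model.

```python
# Toy model of the logic described above: lower extracellular glutamate leads to
# less desensitization, stronger glutamatergic synapses, and (past a threshold)
# male-male courtship. All numbers are invented for illustration; this is not
# the authors' quantitative model.

def synapse_strength(extracellular_glutamate):
    """Strength falls as ambient glutamate desensitizes the synapse (toy relation)."""
    return 1.0 / (1.0 + extracellular_glutamate)

def courts_males(extracellular_glutamate, threshold=0.6):
    """Courtship toward males occurs when synaptic strength exceeds the threshold."""
    return synapse_strength(extracellular_glutamate) > threshold

print(courts_males(1.0))   # "wild type": normal glutamate, weaker synapses -> False
print(courts_males(0.2))   # "gb mutant": low glutamate, stronger synapses -> True
```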

Amazingly, the homosexual behavior could basically be turned on or off by manipulating glutamate transmission. The researchers suggest that this implies there is a physiological model for drosophila sexuality in which flies are pre-wired for both heterosexual and homosexual behavior. The homosexual behavior, however, is normally suppressed by genderblind proteins. A similar model has been proposed for mice.

So, the natural question is: what, if anything, does this say about homosexuality or bisexuality in humans? Well, the authors of the study state that genderblind has a high homology to a mammalian protein, the xCT protein. This is a cystine/glutamate transporter and may be an important regulator of glutamate in the CNS, similar to genderblind in the fly.

Despite this similarity, however, in my opinion it is improbable that humans have a relationship between xCT protein levels and bisexuality/homosexuality similar to the one between genderblind protein and courtship behavior in drosophila. This isn’t to say there couldn’t be a correlation, just that the direct connection seen in fruit flies seems too simple to be the basis for human sexual orientation, which is probably governed by a number of gene-protein relationships. So, while glutamate levels could play a part in suppressing homosexual behavior, they probably couldn’t act like a “bisexuality switch” the way they do in the fruit fly.

Reference:

Grosjean, Y., Grillet, M., Augustin, H., Ferveur, J., Featherstone, D.E. (2008). A glial amino-acid transporter controls synapse strength and courtship in Drosophila. Nature Neuroscience, 11(1), 54-61. DOI: 10.1038/nn2019

Wednesday, July 2, 2008

Send in Submissions for Encephalon #49!

Encephalon #49 will be at Neuroscientifically Challenged on Monday, July 7th. Please send your posts in by 6pm on Sunday the 6th to encephalon {dot} host {at} gmail {dot} com.