Wednesday, February 27, 2008

Good News for Ugly Babies


Babies really have it made. They usually have at least one person, and sometimes a coterie of people, in their lives devoted to figuring out exactly what will make them happy, whether it be food, milk, a pacifier, etc. They also have the privilege of enjoying a warm, cooing welcome from almost anyone they encounter, be it a close relative or complete stranger. Not many of us have the ability to turn away from a smiling baby with cold indifference, and some will stop whatever they are doing just to walk over to tell the infant how cute he or she is.

Charles Darwin made the first scientific attempt at explaining the affinity most people have for babies. He suggested it involves an evolutionarily adaptive mechanism. Babies are the evolutionary goal of procreation realized. Considering the biological investment made in bearing a child, along with its individual helplessness, it would be adaptive for a species to be inclined to treat their young with a caring hand. Konrad Lorenz, a pioneer in explaining instinctive behavior, elaborated on this idea, suggesting there are specific aspects of an infant’s facial features that automatically elicit a parental response, even from a non-parent.

Neuroimaging studies have indicated that parents do show increased activity in areas of the brain associated with rewarding events (nucleus accumbens, anterior cingulate, amygdala) when they see an infant’s face, even if it is not their own child. People who are already parents may be naturally more inclined toward positive feelings when seeing a baby’s face, however, whether because it invites a pleasing comparison to their own child or simply because of their familiarity with infantile features. To determine if a predilection for infants is a universal trait, participants who aren’t parents would need to be included in such a study.

Recently, a group of researchers did just that, conducting a study that involved both parents and non-parents. They used magnetoencephalography, an imaging technique that measures magnetic fields produced by the brain’s electrical patterns (quite possibly the future of neuroimaging), to image brain activity while participants viewed unfamiliar adult and infant faces (interspersed with other symbols). The faces were closely matched in expression and attractiveness to prevent these characteristics from playing a confounding role in the study.

They found that when infant faces were viewed, before normal activity in the brain associated with seeing a human face occurred (in an area called the fusiform face area), there was a surge of activity in the medial orbitofrontal cortex. The medial orbitofrontal cortex has been implicated in a number of previous studies in the perception of rewarding stimuli. This activity occurred only when viewing infant faces, and had an extremely rapid onset—about 130 ms after seeing the face. The speed of the response indicates it was probably non-conscious.
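
To make the timing claim concrete, here is a minimal Python sketch of how a peak latency like that ~130 ms response might be read off a trial-averaged sensor trace. The data are simulated and every number is illustrative; this is not the analysis pipeline from the study.

```python
import numpy as np

# Simulated trial-averaged MEG response ("evoked field") at a single sensor.
# Time axis: -100 ms to 400 ms around stimulus onset, sampled at 1 kHz.
times = np.arange(-0.1, 0.4, 0.001)                    # seconds
noise = np.random.normal(0, 0.2, (60, times.size))     # 60 trials of sensor noise

# Inject a burst peaking ~130 ms after onset (purely illustrative numbers).
burst = 1.5 * np.exp(-((times - 0.13) ** 2) / (2 * 0.015 ** 2))
evoked = (noise + burst).mean(axis=0)                   # average across trials

# Read off the latency of the largest deflection in an early window (50-250 ms).
window = (times >= 0.05) & (times <= 0.25)
peak_latency_ms = 1000 * times[window][np.argmax(np.abs(evoked[window]))]
print(f"Peak latency: {peak_latency_ms:.0f} ms after stimulus onset")
```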

This finding seems to add support to Darwin and Lorenz’s theories of an instinctual preference for the features of infants. The authors of the study note it may also have some clinical importance, specifically in cases of postnatal depression. One of the most troublesome symptoms of postnatal depression is the tendency a mother can acquire to be unresponsive to her child. This coldness can leave a crying infant even more uneasy, rather than pacified, when the mother approaches. Links between depression and the cingulate cortex have been suggested, and the cingulate cortex is strongly connected to the medial orbitofrontal cortex.

The researchers plan to do follow-up studies to investigate if differences in levels of parenting experience, gender, or specific infant features might affect this reaction. But the fact that the initial response appears to be non-conscious implies there may be a neural reward mechanism in place that is specific to seeing an infant. It is easy to understand why such a trait would be adaptive for a parent to have, as the more solicitous parents are toward their offspring the better their progeny’s chances of survival. It also makes sense that the trait would become widespread, as in tribal groups kin selection could play a large role in making infant survival important. Thus, it could have eventually become a response almost all people, parent and non-parent alike, shared. I suppose no one should be surprised that another concept espoused by Darwin may one day help us to better understand human nature.

Reference:

Kringelbach, M.L., Lehtonen, A., Squire, S., Harvey, A.G., Craske, M.G., Holliday, I.E., Green, A.L., Aziz, T.Z., Hansen, P.C., Cornelissen, P.L., Stein, A., Fitch, T. (2008). A Specific and Rapid Neural Signature for Parental Instinct. PLoS ONE, 3(2), e1664. DOI: 10.1371/journal.pone.0001664

Monday, February 25, 2008

Understanding Memory at the Molecular Level


Probably the most extensively researched facet of cognition, memory has proven to be a process that is as difficult to unravel as it is essential to the human experience. Monumental developments in memory research occur regularly, though they often stay under the public radar because they are only pieces of a puzzle we are still incapable of fully assembling. Regardless, the work being done on these pieces will one day allow for an understanding of memory so extensive it will seem to have little in common with our traditional conceptions of what memory is.

An important share of that work is being done at The Scripps Research Institute. A group of researchers there have been focusing their studies on the molecular mechanisms of memory. Last year, they developed a transgenic mouse with genes that cause neurons activated within a particular timeframe to be tagged with a fluorescent marker. Using this mouse, the group demonstrated that the same neurons used during the learning of a fear response are activated during retrieval of the memory. The number of neurons activated also correlated with the intensity of the response, indicating a steady relationship between experience and neural representation. This work helped to elucidate the structure of neural networks involved in memory consolidation and retrieval.

The group then turned their attention to the mechanism involved in long-term memory formation. Receptors for glutamate, the primary excitatory neurotransmitter in the central nervous system, are essential for long-term potentiation (LTP), which is the enhancement of synaptic communication thought to underlie long-term memory formation. It has been suggested that LTP may be the result of the integration of AMPA glutamate receptors (AMPARs) into the synapse that is strengthened. The additional receptors would make the neuron more sensitive to glutamate, leading to LTP.

It has been shown that protein synthesis in neuronal cell bodies is necessary for consolidating memories as well. Thus, it was hypothesized that proteins made in the soma (cell body) are involved in the insertion of AMPARs into synapses associated with memory, resulting in quicker transmission at these synapses and the capacity for memory recall. What has been unclear, however, is exactly how the proteins synthesized in the soma cause plasticity to occur only at the specific synapses associated with a memory.

A popular explanation for this process involves something called synaptic tagging. In this scenario, neuronal stimulation causes the creation of a synaptic tag, a kind of signpost on the neuron that is used to attract proteins necessary for plasticity (such as proteins involved in forming AMPARs). This tagging would occur only at synapses involved in processing the LTP-inducing stimuli, and thus could account for the localization of memory to specific groups of neurons.
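
As a purely illustrative aside, the logic of synaptic tagging can be captured in a few lines of Python: only the synapses “tagged” by stimulation capture newly synthesized plasticity proteins (standing in here for GluR1-containing AMPARs), so only they are strengthened. All of the numbers and the capture rule below are made up for the sketch; it is a cartoon of the hypothesis, not a model from the literature.

```python
import random

# Cartoon of the synaptic-tagging hypothesis: an LTP-inducing stimulus tags a
# few synapses, and only tagged synapses capture newly synthesized plasticity
# proteins, so only they are strengthened.

NUM_SYNAPSES = 20
PROTEIN_POOL = 30            # proteins synthesized in the soma after learning
WEIGHT_PER_PROTEIN = 0.1     # strength gained per captured protein (arbitrary)

synapses = [{"weight": 1.0, "tagged": False} for _ in range(NUM_SYNAPSES)]

# The stimulus tags a subset of synapses (the ones that processed it).
for syn in random.sample(synapses, 5):
    syn["tagged"] = True

# Somatically synthesized proteins are captured only where a tag is present.
tagged = [s for s in synapses if s["tagged"]]
for _ in range(PROTEIN_POOL):
    random.choice(tagged)["weight"] += WEIGHT_PER_PROTEIN

for i, syn in enumerate(synapses):
    label = "tagged" if syn["tagged"] else "untagged"
    print(f"synapse {i:2d} ({label:8s}): weight = {syn['weight']:.2f}")
```

Only the tagged synapses end up potentiated, which is the essence of how somatic protein synthesis could produce synapse-specific plasticity.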

The researchers at The Scripps Research Institute, Naoki Matsuo, Leon Reijmers, and Mark Mayford, again used transgenic mice, this time to investigate the concept of synaptic tagging. They engineered mice to express a subunit of the AMPAR, referred to as GluR1, fused to a green fluorescent protein, and suppressed expression of this gene with doxycycline until the experiment began. They then exposed some of the mice to a fear-conditioning program, in which the animals learned to associate a foot shock with a particular environment. Because GluR1 is necessary for the formation of AMPARs, new AMPAR formation after the fear conditioning could be measured by examining dendritic spines in hippocampal brain slices for fluorescence.

Dendritic spines are regions that protrude from dendrites, where synapses are located and input from other neurons received. There are three morphological types: thin, stubby, and mushroom-shaped.

As they expected, the researchers found an increased proportion of fluorescent GluR1 subunits on the dendrites of hippocampal neurons in those mice that underwent the fear conditioning. Specifically, the fluorescent GluR1 was found on the mushroom-shaped spines, and not in significant amounts on the other spines. It seems a mechanism that resembles synaptic tagging plays a role in directing GluR1-containing AMPARs to mushroom spines.

This study is important for a number of reasons. Understanding the morphological changes that result in LTP is necessary for developing a working physiological model of memory. This study provides insight into the mechanism of these changes and makes our understanding of the memory process a little more concrete. In addition, synaptic modifications that lead to behavioral changes are thought to underlie a number of human tendencies. The successful use of transgenes in this memory study could thus provide a basis for their use in studying other behaviors shaped by such synaptic changes, e.g. addiction.

Reference:

Matsuo, N., Reijmers, L., Mayford, M. (2008). Spine-Type-Specific Recruitment of Newly Synthesized AMPA Receptors with Learning. Science, 319, 1104-1107. DOI: 10.1126/science.1149967


Friday, February 22, 2008

Why Some People Didn't Give Up on Gene Therapy

Gene therapy, a treatment for disease that involves the insertion of healthy or disease-fighting genes into a person's cells, has undergone something of a roller coaster ride of public approval. Although hotly contested from its inception due to alleged ethical and methodological flaws, its first use in 1990 on four-year-old Ashanti DeSilva to alleviate the symptoms of a rare immune disorder seemed successful. The next decade, however, instead of being one of vindication for gene therapy advocates, was fraught with disappointment due both to technical problems with its application, and to grossly unethical handling of its use in people.

In 1999, Jesse Gelsinger, an eighteen-year-old with a genetically inherited liver disease, died in a clinical trial for gene therapy. Subsequent investigations found that the researchers involved in the study acted unscrupulously in a number of ways, such as not reporting serious side effects experienced by other patients and not disclosing that monkeys had died after similar treatment in previous studies. This provided more ammunition for opponents of gene therapy. Then, in a clinical trial at the beginning of this decade, two children developed leukemia-like symptoms, leading to a temporary ban on clinical trials for the method.

Clinical trials have since resumed, but as you can see, scientists studying gene therapy have had many obstacles to overcome in getting people to focus on the treatment’s potential and the successes it has had. Thus, news coming from researchers at the Cedars-Sinai Medical Center will likely give gene therapy proponents reason to celebrate.

The group recently completed a study using gene therapy in rats to attack glioblastoma (GBM), the most common type of brain cancer. GBM is especially lethal, with a prognosis of six to twelve months of life after diagnosis. It is also extremely hard to treat, as, by the time it is diagnosed, it has usually spread to other brain areas, making it difficult to surgically remove a tumor without leaving cancer cells behind. The blood-brain barrier, a safety mechanism that prevents toxins in the blood from entering the brain, stops chemotherapeutic agents from getting to tumor cells in effective amounts. Dendritic cells, which are essential to the immune process as they present foreign antigens on their cell surface to stimulate the immune response, do not naturally occur in the brain. Thus, the tumor in GBM grows unchecked by the immune system. This growth also usually causes behavioral changes and cognitive deficits as the tumor affects other areas of the brain.

The scientists at Cedars-Sinai, using a viral vector (a virus used as a vehicle to carry genetic material), sent two proteins to the cancer cells in the rats’ brains. One of the proteins attracted dendritic cells to the brain, which resulted in an immune response that attacked the cancer cells. Another protein, when combined with an antiviral medication (ganciclovir), also killed tumor cells.

The study found this treatment increased survival to about 70%. In addition, any behavioral or cognitive deficits that were caused by the growing tumors disappeared after the tumor was destroyed. The rats' immune systems also retained the ability to generate an immune response to the cancer cells. Thus, when cancer cells were re-introduced later on, the immune system was able to kill them off on its own.

Obviously, the implications of having a successful treatment for this type of tumor are tremendous, not only for the management of brain cancer, but also for cancer research in general. Phase I clinical trials for the therapy have been scheduled to begin this year. Unfortunately, even if all goes well in clinical trials, this treatment is still years away from being utilized in practice due to the structure of FDA guidelines for drug development. It is, however, a new bright spot on the horizon for gene therapy. It should also be cause for excitement for everyone, gene therapy advocate or not, as it may represent a crucial step toward treating cancer, a disease that has often eluded even our best efforts in the past.

Thursday, February 21, 2008

Encephalon Blog Carnival

For those of you who don't already know, the revival of Encephalon is complete and posted at Sharp Brains. The carnival is a collection of neuroscience and psychology postings from 24 bloggers (including one from yours truly). Check it out if you haven't already!

Tuesday, February 19, 2008

Human Flocking Behavior with a Shaky Segue into Mirror Neurons

On occasion, I will be in a public place like an airport, sports stadium, or bar/club, and I’ll pause to look at the sea of people that I’m part of. I then usually start to feel being human is a little less significant than we are inclined to think it is, as I get caught up making zoologically comparative observations. In the case of the airport or large event, I often consider how we resemble herds of cattle, moving in one direction or another with the urging of signs or velvet ropes instead of sheepdogs (or sometimes even with security guards barking at us much like a sheepdog would). When at a bar or club, I’m prone to make comparisons to the courtship displays of various animals, like the ostentatious demonstration of the male peacock, or the male fruit fly’s persistent pursuit of a mate.

A group of scientists at the University of Leeds recently published a study that inevitably leads one to similar comparisons. It focuses on flock-like behavior in humans. The researchers conducted their experiments in a large hall with groups of people that varied in size. A number of people in each group were given specific directions about which walking route to follow; the rest were left uninformed, to amble about on their own, and weren't told that directions had been given to anyone. The participants weren’t allowed to communicate with one another, by speech or gesture.

The study found that the uninformed individuals tended to follow those who had been given directions, even though the informed were comparatively few among the many. In fact, the researchers observed that as the number of people in the group increased, proportionally fewer informed participants were needed to create a following. The largest groups of 200 or more needed only about 10 people walking in a specific direction to cause the rest of the group to fall in behind them. The followers, when interviewed afterward, often didn’t seem to be aware they were being led.
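
For the curious, the basic effect is easy to reproduce in a toy simulation. The sketch below is not the Leeds group's model; it is a minimal, Couzin-style alignment model in Python in which a handful of "informed" agents bias their heading toward a goal direction while everyone else simply aligns with nearby neighbors. Run it and the uninformed majority typically ends up drifting the informed minority's way, with no explicit communication at all.

```python
import numpy as np

rng = np.random.default_rng(0)

N, N_INFORMED, STEPS = 200, 10, 300       # 10 informed agents out of 200
SPEED, RADIUS, GOAL_WEIGHT = 0.5, 5.0, 0.4
GOAL = np.array([1.0, 0.0])               # direction given to the informed few

pos = rng.uniform(0, 50, (N, 2))
angles = rng.uniform(0, 2 * np.pi, N)
head = np.column_stack([np.cos(angles), np.sin(angles)])
informed = np.zeros(N, dtype=bool)
informed[:N_INFORMED] = True

def normalize(v):
    norm = np.linalg.norm(v, axis=-1, keepdims=True)
    return v / np.where(norm == 0, 1, norm)

for _ in range(STEPS):
    # Everyone aligns with the average heading of neighbours within RADIUS.
    dists = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    neighbours = (dists < RADIUS).astype(float)
    desired = normalize(neighbours @ head)
    # Informed agents also pull toward the goal; no one "talks" to anyone.
    desired[informed] = normalize((1 - GOAL_WEIGHT) * desired[informed]
                                  + GOAL_WEIGHT * GOAL)
    head = desired
    pos += SPEED * head

mean_uninformed = normalize(head[~informed].mean(axis=0))
print("Mean heading of uninformed agents:", np.round(mean_uninformed, 2))
print("Alignment with goal direction:", round(float(mean_uninformed @ GOAL), 2))
```

An alignment score near 1 means the naive majority is heading where the informed minority was told to go; the ratio of 10 informed agents to 200 total mirrors the proportions reported for the largest groups.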

You might be wondering what this all has to do with neuroscience and the answer is: no one knows. This was a behavioral study, and the authors didn’t speculate on the brain regions that might be involved. Although suggesting their involvement in this type of flocking behavior is purely speculative, it seems like as good a time as any to bring up the subject of mirror neurons.

Mirror neurons were first discovered in experiments with macaque monkeys. A group of researchers led by Giacomo Rizzolatti were using electrodes to measure brain activity while the monkeys engaged in motor tasks, like picking up pieces of food. They unexpectedly found that some of the neurons were activated not only when the monkeys picked up the food, but also when they saw someone else (experimenter or another monkey) pick it up. After this finding, areas were identified in the human brain that may play a similar role. They include regions of the inferior frontal cortex and parietal lobe.

Since this discovery much excitement has surrounded the concept of mirror neurons. As these neurons seem to be specifically activated when watching another person perform a goal-directed action, some have suggested they may underlie our ability to understand that other people have intentions. This could mean they are the foundation for empathy, imitation, and communication. Some studies have even indicated malfunctioning mirror neurons may contribute to autism.

The truth is (as is usually the case with the brain), mirror neurons aren’t going to be that easy to figure out. Nor will they be a magic bullet that will conveniently explain a panoply of human behavior. They are part of a complex system that we have a very vague understanding of at this point. We can only speculate about their involvement in much larger reactions like empathy and language. They do seem to play a role in perceiving intention, however, and thus have an intriguing potential when it comes to understanding human behavior, as so much of it is based on knowing that we are surrounded by other intentional agents.

So, to get back to the flocking study, it doesn’t have a specific connection to neuroscience—yet. Just remember, however, if you are standing in line among hundreds to get to your gate at the airport, or being herded through the turnstile at the sports stadium, or observing the human courtship rituals at a club with a sense of detachment, and these sights seem surreal or slightly dehumanizing to the comparative biologist in you, and you begin thinking how similar we are to cattle, or sheep, you may not be far off base—and you’re not alone.

Sunday, February 17, 2008

Sisyphus and Science, or History Repeats Itself

Researchers working at the Harvard Stem Cell Institute (HSCI) published a paper online last week in Cell Stem Cell discussing advances they’ve made in trying to coax adult cells to revert to embryonic stem cell-like states, without viruses or oncogenes (cancer-causing genes). They have outlined the molecular process involved in this nuclear reprogramming, something which up until now has been a very nebulous sequence of events. Being able to reprogram cells without viruses or oncogenes is crucial, as their involvement prohibits the use of the resultant embryonic stem cells (ESC) in humans.

While it is important to understand this process, a great amount of time and research money is being spent trying to convert adult cells to ESCs when there are somewhere around half a million frozen embryos sitting in fertility clinics around the country. When a woman undergoes in vitro fertilization (IVF), several embryos are created from the fertilization process. After a few days, the embryos are inspected and the healthiest few are selected for transfer (the actual number transferred varies with the age of the patient and the laws of the country where the procedure is done). The patient can then decide what to do with the remaining embryos: freeze (cryopreserve) them, donate them to research, or dispose of them. Many patients, thinking of the potential for life (or for future IVF procedures) the blastocysts possess, have an understandably difficult time making the decision to donate them to research or have them disposed of (for an interesting article on the difficulty of this decision, go here). Thus, the embryos are frozen and there they stay, sometimes indefinitely.

But, due to George W. Bush’s fanatical opposition to stem cell legislation, scientists can’t get research funding from the government to use even those frozen embryos that patients have chosen to donate to science. They remain untouched, alongside the hundreds of thousands of others, as the top scientists in the country try to figure out ways to make ESCs out of adult cells.

Ironically, IVF itself was the focus of political and ethical debates for years, attacked with the same arguments being used against ESC research. Now, however, it is a commonly accepted practice. And, while IVF provides infertile couples or women with the ability to have children—an amazing blessing for these people—ESCs have potential to be used in the treatment of any disease that involves the degradation of tissue. This would include Parkinson’s, Alzheimer’s, type I diabetes, spinal cord injuries, or stroke (to name a few). Thus, ESCs might be able to provide some blessings of their own.

While the advancements of the researchers at HSCI are great, I can’t help but wonder what type of developments we would be seeing if scientists didn’t have to focus on this hurdle of turning adult cells into ESCs, when there are hundreds of potential ESC lines just waiting out there to be created from frozen embryos. It’s like being tied down to a chair in the middle of the grocery store and dying of starvation.

Friday, February 15, 2008

Can Neuroscience and Free Will Coexist?

Studying neuroscience involves dissecting individual behaviors and separating them into their biological components. For example, imagine yourself sitting in front of the television as dinner time is nearing. You grow hungrier as you wait for the show you are watching to come to an end, then when it does you get up and go to the kitchen to make something to eat. If an interviewer were to later ask you why you got up to eat at that moment, you might reply “I was hungry, so I decided to have dinner”.

From a neuroscience standpoint, the answer might be a little more complex. As the food you ate for lunch became completely digested, the glucose and insulin levels in your blood began to fall. The lower blood insulin level was detected by your hypothalamus, which sent signals to various cortical areas (a vaguely understood process) providing the impetus to obtain food. The cortex then activated the basal ganglia, leading to the initiation of a motor movement (through the corticospinal tract), which carried you to your refrigerator.

A glaring difference between the two explanations for your behavior is that one involves choice, while the other consists of the perfunctory satisfaction of a biological drive. Do you decide it is time to eat, or do you feel a biological urge to consume food since your blood glucose levels have fallen and your body is in need of replenishment? Deciding to eat is reminiscent of a human action motivated by free will, while being biologically driven to replenish energy stores is suggestive of automatic, reflexive behavior we are usually more comfortable ascribing to drosophila or lab rats.

Of course in this situation the truth seems to lie somewhere in between. You didn’t have to get up to eat at that second (you weren’t starving); you chose to. At the same time it was due in part to your need to satisfy a biological drive. Other questions about neuroscience and choice, however, can get a little more difficult. If genetic influences and brain aberrations underlie drug use and addiction, how constrained by their biological makeup is someone in making a decision to abstain from using drugs if they are exposed to them? What if genetic influences could be shown to lead to biochemical effects that strongly incline someone toward aggressiveness, shyness, studiousness, self-restraint, or risk-taking? How much free will do we have when it is impinged upon by genes, neurons, neurotransmitters, and so on?

Roy F. Baumeister addresses the issue of free will in the context of present-day biology and psychology in a recent article in Perspectives on Psychological Science. Baumeister affirms that nonconscious processes play a prodigious role in behavior. He also, though, acknowledges our ability to choose within the boundaries of those automatic processes. The scenario above is an example of this type of choice. You delay the initiation of your movement to the kitchen until you have finished watching your television show. While not deterministic in its definition, however, this is a very limited version of free will. It is not so much a question of "if" you will go to the kitchen, but "when". The outer realm of possibility might involve you ordering pizza instead of microwaving leftovers, but the predominant goal of obtaining food will be achieved somehow, within a reasonable amount of time.

Baumeister suggests this circumscribed free will is the product of natural selection. He postulates that the primary driving force behind human social evolution was cultural adeptness. This would include an array of skills and behavior, such as the ability to comply with norms, follow rules, act morally, postpone gratification, and pursue long-term goals. The evolution of self-control, Baumeister hypothesizes, might be the first step in selection for cultural competence. As human culture developed, it became more saturated with information. Intelligent choice (free will) may have been selected for as a necessary means to process culturally relevant information, and act on it in a manner compatible with social mores. For example, the ability to curb your hunger may seem trivial when it involves waiting until the end of a television show. It could have been important, though, in a hunter-gatherer society where food was scarce. Here it might have meant restraining yourself from grabbing your tribesmen’s portions from out of their hands when you were done with yours, a transgression that would have been severely punished.

If self-control were such an important selective pressure, Baumeister argues, it should probably be a biologically expensive skill. He points to experiments by Gailliot et al., which demonstrate that low blood glucose levels are associated with poor performance on self-control tasks; performance was restored after participants drank a glass of sugar-sweetened lemonade. Baumeister suggests self-control warrants the use of large amounts of biological energy because of its adaptiveness. Not only does it improve decision-making abilities, but it has been correlated with success, likability, and health—an evolutionary goldmine.

This is all well and good, but if free will is limited, and (to add insult to injury) a simple biological product of descent with modification, why the adamant human belief in freedom of choice? Why that unshakable part of us that says: we are not drosophila! We are not rats! We are different, we are people! Baumeister speculates belief in free will may have become so strong because it is beneficial to a healthy society. Envision a social order where we are not held responsible for our actions. Or imagine trying to trust and cooperate with others when you know there is no moral imperative pushing them to act a certain way. It would be disastrous. Belief in free will could keep a society healthy, even if that free will was, in actuality, limited.

Thus, according to Baumeister’s view, free will is largely illusory. We are primarily ruled by biological drives, and free will is our way of moderating those impulses to keep them from being socially discordant, and thus counterproductive. We are so quixotically convinced of our freedom of choice due to separate, but related, selection pressures that are necessary for the existence of functional societal groups.

Studying neuroscience and/or genetics will inevitably lead one into such intellectual morasses. It is difficult to imagine that debates about personal responsibility won’t become more intense as our knowledge of genetic and biological predisposition grows. Perhaps that knowledge will lead to a better understanding of free will and our inclination to believe in it. Until then what our reality consists of—free will, determinism, or somewhere in between—is up for debate. Choose whichever you’re most comfortable with.

Reference:

Baumeister, R.F. (2008). Free Will in Scientific Psychology. Perspectives on Psychological Science, 3(1), 14-19. DOI: 10.1111/j.1745-6916.2008.00057

Tuesday, February 12, 2008

Further Proof "Junk DNA" Has Value

The prevalence of noncoding regions of DNA in the genome of humans and many other eukaryotic organisms has long been a subject of controversy. Protein-coding genes are composed of alternating regions called exons and introns. When DNA is transcribed to mRNA (the first step of protein synthesis), the introns (from "intragenic regions") are spliced out before the final mRNA sequence is formed. The exons become part of the mRNA and can code for amino acids involved in protein formation. The function of introns has been a mystery, and their ostensible superfluousness has caused some to refer to them as “junk DNA”.
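
For a concrete (if cartoonish) picture of that splicing step, here is a short Python sketch with a made-up pre-mRNA: the intron boundaries are hard-coded rather than recognized from splice-site sequences, which is of course how the real spliceosome does it.

```python
# Toy illustration of splicing: introns are cut out of the pre-mRNA and the
# exons are joined to form the mature mRNA. Exon boundaries are hard-coded;
# the real spliceosome recognizes splice-site sequences (roughly GU...AG).

pre_mrna = "AUGGCU" + "GUAAGUUUAG" + "CCGAUU" + "GUAUCCCAAG" + "UGGUAA"
exons = [(0, 6), (16, 22), (32, 38)]      # (start, end) positions of the exons

mature_mrna = "".join(pre_mrna[start:end] for start, end in exons)
introns = [pre_mrna[exons[i][1]:exons[i + 1][0]] for i in range(len(exons) - 1)]

print("pre-mRNA           :", pre_mrna)
print("mature mRNA        :", mature_mrna)   # exons only, ready for translation
print("spliced-out introns:", introns)
```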

Numerous hypotheses have been developed to explain the existence of introns. Some have suggested they are merely relics of old genes that no longer have any use to us, and thus have become non-functional. Others have postulated that junk DNA provides a buffer around integral genes to save them from being cut when portions of chromosomes “cross over” in meiosis. But many other scientists have not been satisfied with the suggestion that such a large portion of a genome (by some estimates up to 97%) would have a passive, or even meaningless, role.

Included in that group is a team of researchers at the University of Pennsylvania School of Medicine, who have discovered an important role in cellular function played by an intron. In 2005, they found that dendrites, the branch-like arms of a neuron that receive input from other neural cells, have the ability to splice mRNA. This was previously thought to occur only in the nucleus of cells. More recently, the group discovered an mRNA outside the nucleus that contains an intron. The mRNA encodes a protein important to the functioning of the dendrite.

When the group removed the intron from the mRNA and left a spliced RNA molecule in the cell, the electrical properties of the cell became irregular. They believe the intron plays an integral role in guiding the mRNA to the dendrite, and may be involved in determining how many mRNAs are brought there to form electrically conducting channels. To serve this function, the intron may be spliced out of the mRNA by the dendrite and then incorporated into the dendrite itself. The details are not yet certain, but what is clear is this particular intron has an essential role in the cell, thus bringing the moniker “junk DNA” further into question, and inviting more research into the greater part of our genome.

Monday, February 11, 2008

Autism May Involve Limited Awareness of Self

As the prevalence of autism continues to rise—for reasons that are still unknown—researchers are frantically trying to understand the disorder. Autism consists of a spectrum of behaviors, such as repetitive or ritualistic behavior, self-injury, impaired language ability, and limited communication skills. One of the most commonly held views on autism has been that those who are afflicted have a decreased capacity to feel empathy, or to understand that other people have their own mental states, desires, and intentions. Neuroimaging studies with autistic individuals have seemed to support this idea.

Other studies have indicated there is a diminished ability to recognize self in the autistic mind as well. A neuroimaging study released this week supports that hypothesis. The experiment used fMRI and a relatively new technique called hyperscanning to measure brain activity of autistic and non-autistic adolescents as they played an interactive game together. Hyperscanning is an imaging method that allows multiple people to be scanned with fMRI simultaneously as they communicate with one another.

The researchers compared the fMRI images to images they had taken of athletes’ brains as they imagined themselves taking part in athletic activities. They found this focus on “self” caused high activity in the cingulate cortex, an area previously implicated in self-awareness and social interaction. This area was also highly activated in the non-autistic participants in the game as they thought about what action they would take. It contrasted with a different pattern of activity that occurred when they thought about the actions of their partner. Although the autistic participants were able to play the game effectively, they showed much lower levels of stimulation in the cingulate cortex. The activity was also negatively correlated with the severity of their autistic symptoms (the more severe the symptoms the lower the activity).

All of the autistic participants in the study were considered high functioning, with normal or high normal intelligence quotients. The research group plans to conduct more imaging experiments in the future with autistic individuals who have lower IQs. For now, this experiment may shed more light on the brain mechanisms underlying autism. It also adds more complexity to the problem, however, as it appears a deficit in the awareness of others may not be all that’s involved.

Sunday, February 10, 2008

Baby Math Geeks

There are certain abilities that humans develop with such universality it seems as if our brains might be specifically designed to acquire them. One example is language. About fifty years ago, most psychologists believed children learned language by imitating the adults around them, then refined it by receiving feedback about the accuracy of their utterances. Famous linguist Noam Chomsky was the first to point out, however, that children have an ingenious ability to create sentences they’ve never heard before, and the speed with which they pick up their native language is much quicker than any realistic learning curve. Chomsky posited there must be something inherent in the architecture of our brains that expedites learning language at a young age. Since Chomsky’s contributions to what became the cognitive revolution in psychology, a great deal of research has been done that supports the theory that people have a natural inclination toward acquiring language. Children will develop language abilities along the same general timeline regardless of the opportunities they have to learn from adult examples (excluding cases of severe isolation or abuse).

Some believe the concept of number also has an innate origin. Neuroimaging studies have indicated there are specific brain areas, primarily in the parietal lobe, that are associated with counting objects. These areas have been shown to be active during counting tasks in children as young as four years old. Behavioral experiments have demonstrated that even 6- to 9-month-old infants have some basic understanding of numerical concepts (such as knowing that if you have one object and add another you should end up with two objects, not one). This proclivity to understand numbers at such a young age might indicate that the human brain is organized to facilitate the learning of numerical concepts, just as it is with language.

The imaging evidence for a numerical region in the parietal lobe comes primarily from adults and children four or older. A group of researchers, Veronique Izard, Ghislaine Dehaene-Lambertz, and Stanislas Dehaene, wanted to determine if this localization of numerical function existed in three-month-old infants. They used electroencephalography (EEG), which measures electrical activity of the brain through electrodes placed on the scalp, to ascertain which areas of the infants’ brains were active while they watched animal-like objects on a black background. Sometimes the objects would change in appearance, other times they would differ in number.

The group found a clear distinction in brain activity between viewing object appearance and object number. When the type of the objects was changed, activity was recorded in the temporal cortex. This corresponds to regions associated with object differentiation in adults and older children. When the number of the objects was changed, however, there was activity in a network that spread from the parietal to the prefrontal lobe. This network is similar to that seen in object number recognition in adults and older children.

These results indicate there may be an inborn mechanism in the brain for understanding numerical concepts. This shouldn’t be too shocking, as it seems an understanding of numbers would be crucial to evolutionary survival (what’s more dangerous: one predator or four?). Nevertheless, it is important to remember that the concept of being born with a mind like a tabula rasa maintained its popularity throughout most of the last century. Experiments like this one suggest there are at least some basic guidelines for us to follow on that slate we are born with.


Reference:

Izard, V., Dehaene-Lambertz, G., Dehaene, S. (2008). Distinct Cerebral Pathways for Object Identity and Number in Human Infants. PLoS Biology, 6(2), e11. DOI: 10.1371/journal.pbio.0060011


Friday, February 8, 2008

The Chicken and the Egg of Alzheimer’s

Alzheimer’s disease (AD) is the most common form of elderly dementia, affecting over 25 million people worldwide. Some estimates put new cases of AD at 4-5 million per year (one new case every 7 seconds). The neurodegenerative effects of AD are devastating, causing cognitive deterioration that can lead to invalidism and drastic memory loss. Furthermore, AD is a frustrating disease to scientists and doctors because, although there are signature neurological changes that accompany its progression, the etiology remains unknown.
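
As an aside, the “one new case every 7 seconds” figure follows directly from the annual estimate; a quick back-of-the-envelope check (taking 4.5 million as the midpoint is my assumption):

```python
# Back-of-the-envelope check of the "one new case every 7 seconds" figure,
# taking 4.5 million as the midpoint of the 4-5 million per year estimate.
seconds_per_year = 365 * 24 * 60 * 60            # 31,536,000
new_cases_per_year = 4.5e6
print(round(seconds_per_year / new_cases_per_year, 1), "seconds per new case")  # ~7.0
```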

Two of the hallmark signs of AD—and putative culprits for its causation—are amyloid plaques and microglia. Amyloid plaques, also known as senile plaques, are buildups of fibrous protein material known as amyloid. They are not specific to AD, and can be seen in the brains of older people in general. They are much more prevalent in an AD-afflicted brain, however, and are associated with neurodegeneration. Microglia, on the other hand, are part of the immune system of the brain. Their job is to detect foreign agents in the brain and quickly eradicate them. The substances microglia secrete to protect neurons, however, can also be neurotoxic in excess.

There has been an ongoing debate among scientists as to which comes first in AD: the plaques or the microglia. Some have argued that overactivity of microglia causes the plaques to appear, leading to AD. Others have asserted the opposite: that the plaques develop first, causing microglia to accumulate as an immune response, which in time becomes neurotoxic due to the continued development of plaques.

A group of researchers from Harvard Medical School believe they have answered the question at the heart of this debate. The group experimented with mice that were genetically engineered to develop amyloid plaques. They surgically placed a small window in the skulls of the mice so they could check for plaque formation daily. They found that amyloid plaques formed independently, which instigated an immune response by microglia. The microglia actually seemed to limit the growth of the plaques, but cognitive degeneration still progressed. The researchers suggested this was because amyloid fragments were being broken off of the plaques and damaging surrounding neurons.

Another important finding from the study was that the plaques formed much more quickly than previously thought possible—sometimes in only one day. Cognitive impairment consistent with AD followed several days afterward. This came as a surprise, as it was believed plaque formation should take weeks, or months, to occur. While the researchers are quick to note this was an experiment with mice and may not be directly applicable to humans, they also point out this may underscore the importance of developing prophylactic treatments for plaques in order to defeat AD. There are currently drugs in clinical trials that are designed to do just that, by inhibiting an enzyme integral to amyloid development. The results of the phase III trials of one such drug should be released this summer.

Thursday, February 7, 2008

Striatum Found to be Integral to Addiction in Rats

A certain group of brain structures, often referred to as the “reward system”, has long been recognized by scientists as having an important role in addiction. The structures of the reward system are related in that they are all part of a major dopamine network, the mesotelencephalic dopamine system. Dopamine neurons that arise from the ventral tegmental area (VTA), a nucleus in the midbrain, and project to the nucleus accumbens (NAc) are thought to play the largest part in addiction. The NAc is part of the striatum, a subcortical region that gets its name from its combination of white and gray matter that give it a striped, or striated, appearance.

David Belin and Barry Everitt of the University of Cambridge have been studying this system in rats. Their goal was to elucidate the mechanism that causes early experiences with drugs to turn into the compulsive drug-seeking behavior characteristic of addiction. Previous studies have suggested that addiction occurs when behavioral control over drug-seeking is transferred from the ventral to the dorsal striatum.

Belin and Everitt taught a group of rats to press a lever to receive cocaine. When their behavior became compulsive, the researchers severed the connection between the two striatal areas. This caused the compulsive behavior to diminish. In a second experiment, the researchers showed that rats with the severed connection could still be trained to push a lever for a reward; the disconnection seems to affect only the compulsivity associated with addiction. Severing the connection disrupts a major line of dopamine transmission that Belin and Everitt believe must be essential for addiction to occur.

Wednesday, February 6, 2008

Sweet Dreams, C. Elegans

Sleep is sometimes a vexing subject for scientists. We spend about 1/3 of our lives doing it. Yet, despite all the progress that has been made in discovering the reasons behind myriad other human behaviors, there is still no consensus on why we sleep. Some believe it has a recuperative effect on the body, allowing energy stores to be replenished. While a good night’s sleep may certainly allow us to feel more rested, this theory doesn’t explain the necessity of sleep, as the same result presumably could be obtained by lying still for eight hours. Others suggest sleep is an evolved, adaptive behavior that protected our ancestors from too much activity during the night—a dangerous time due to their inability to spot nocturnal predators. This also is an unsatisfying concept for several reasons, one being that some animals have evolved methods to enable sleep even though the act itself puts them in danger (e.g. bottlenose dolphin). Another hypothesis is that sleep is a necessary part of memory consolidation and mental functioning. While there is a great deal of debate over the particulars of this theory, it has more evidential support than the other two hypotheses.

Researchers at the University of Pennsylvania School of Medicine are hoping to learn more about the purpose of sleep by studying Caenorhabditis elegans, a roundworm used as a model organism for many of the same reasons I outlined in my post about drosophila research. The group is the first to show that nematodes do experience sleep. They discovered a period in the worm’s development, which they termed lethargus, that is similar to sleep in other animals. The fact that sleep can be found in organisms so evolutionarily distant from us is further support for the idea that sleep is necessary, and not just an evolved, adaptive behavior.

But the changes that occur in the worm during lethargus may also give some clues as to the purpose of sleep. While C. elegans is in this phase of quiescence, numerous synaptic modifications take place within its nervous system. Thus, the researchers at Penn postulate that lethargus is a period in roundworm development that is necessary for nervous system growth. As synaptic changes occur during sleep in mammals as well, this lends support to the idea that sleep is necessary for proper brain functioning and development.

The group also identified a gene in C. elegans that regulates sleep. It has a human homologue that, although previously known, has not been studied in relation to sleep. The researchers hope these findings in the roundworm will eventually provide insight into the human sleep process and bring us closer to solving the mystery of why we spend so much of our lives in a seemingly nonproductive state.

Monday, February 4, 2008

Body Integrity Identity Disorder

There was an article in the last issue of Scientific American: Mind I have been wanting to discuss, but I keep getting sidetracked. So, I’ll return to it now before I forget about it. The article focuses on a disorder that is slowly gaining more attention from both medical professionals and the public. Once relegated to occasional guest appearances on the Jerry Springer Show and often considered an urban legend, the affliction is now taken seriously and considered very legitimate. It has been termed body integrity identity disorder (BIID), and is characterized by an irrepressible feeling of dissociation from part of your body, along with a desire to have the limb or limbs in question amputated.

If you have not heard of it, it may sound a little outrageous, but those who suffer from it appear to experience serious mental anguish. And it is far from simply a cry for attention, as some of these patients have drastically taken matters into their own hands to alleviate their distress. One such patient wanted to be rid of both legs. After a surgeon refused to comply with his wishes, he obtained 100 lbs. of dry ice and buried his legs under it for six hours. He then went to the hospital with legs so frozen the tissue soon turned black. Eventually the doctors had no choice but to fulfill his original wish, and amputate both legs for his own safety.

There is no consensus on why this disorder occurs, or how it should be treated. Some consider it similar to gender identity disorder, as both begin early in life and are centered around a desire to fundamentally change some part of one’s body. Others attribute it to the patient being raised in an environment where he or she received little love and attention, leading him or her to envy the sympathy amputees evoke. Many believe there must be a neurological origin to such an overwhelming obsession.

That neurological basis could involve a distortion in body-mapping regions of the cerebral cortex. One such region is the primary somatosensory cortex, an area of the parietal lobe to which sensory information of touch is relayed from all parts of the body. Just anterior to (in front of) the somatosensory cortex is another body-mapping area, the primary motor cortex. This region is involved in movement, sending information on planned muscle activation throughout the body. Both of these areas are subdivided into sections that deal with each specific part of the body, creating “body maps” of neural activity in the cerebral cortex. BIID could stem from a lesion or other disturbance in one of these areas.

Unfortunately, it is still not known exactly what is happening in the minds of these patients, nor is there a protocol for how to treat them. Traditional psychiatric medications don’t seem to have any effect. Surgery to remove the limb(s) in question has so far been the only treatment to show consistent success. For obvious ethical reasons, most doctors refuse to participate in such surgery. Others have performed it, however, and those patients who do undergo it seem to feel gratified and at peace with their bodies for the first time in years. The medical field is far from agreement on how the disorder should be handled, but hopefully as recognition of the legitimacy of BIID continues to grow, so will efforts to develop a suitable treatment.

For more information on BIID, visit www.biid.org.

Sunday, February 3, 2008

Unseen Drug Cues Can Still Induce Cravings

People generally look at addiction in one of two ways. Some understand the extreme difficulty inherent in battling a craving, whether it be for drugs, food, cigarettes, etc. This is especially the case with those who have experienced addiction, and to a lesser extent with those who have helped someone else through it or studied it extensively. Others may attribute addiction to a personal choice, as if a drug user is able to simply sit down and decide whether or not he or she wants to do drugs today, and then acts on that decision.

The science doesn’t support the latter view. A number of neuroimaging studies demonstrate abnormal activity in the brains of drug addicts. This aberrant activity can involve decision-making areas (pre-frontal lobes), areas correlated with compulsive behavior (anterior cingulate and orbitofrontal cortices), and reward processing centers (mesolimbic dopamine pathway and limbic system). Now consider the genetic influences on tendencies toward addictive behavior and it begins to seem that, if a personal decision is involved, the deck is stacked pretty heavily against an addict making the right one. Attributing addiction to personal choice is like telling someone who is clinically depressed they just need to “snap out of it”.

Recent research by a group at the University of Pennsylvania has shown addiction to be even more difficult to overcome than previously thought. The group used functional MRI (fMRI) to study brain activity in cocaine-addicted patients as they viewed images with different themes. Some pictures were drug stimuli (e.g. a crack pipe), others were sexual in nature, and the rest were either aversive (i.e. involving pain) or neutral. The images were flashed on a screen for 33 milliseconds, which is not considered long enough to allow for conscious processing. Each was immediately followed by another image used to distract the participant and ensure the first picture could not be consciously registered—a technique known as backward masking.
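
To give a feel for how short 33 ms is, here is a rough Python sketch of how such a backward-masked trial is typically timed on a fixed-refresh display; the numbers are illustrative and not taken from the Penn group's protocol.

```python
# Rough sketch of backward-masked trial timing on a fixed-refresh display.
# At ~60 Hz, each frame lasts ~16.7 ms, so a 33 ms target occupies two frames
# and the mask is drawn on the very next frame. Values are illustrative only.

REFRESH_HZ = 60
FRAME_MS = 1000 / REFRESH_HZ                     # ~16.7 ms per frame

def frames_for(duration_ms):
    """Round a desired duration to a whole number of refresh frames."""
    return max(1, round(duration_ms / FRAME_MS))

target_frames = frames_for(33)                   # -> 2 frames, ~33.3 ms on screen
mask_frames = frames_for(467)                    # mask shown long enough to be seen

print(f"target: {target_frames} frames ({target_frames * FRAME_MS:.1f} ms), "
      f"then mask: {mask_frames} frames ({mask_frames * FRAME_MS:.1f} ms)")
```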

The patients exhibited brain activity consistent with drug craving when the drug images were shown, even though they were not visible long enough to be consciously processed. This activity occurred throughout limbic structures previously implicated in reward and drug addiction, such as the striatum, amygdala, orbitofrontal cortex, insula, and prefrontal cortex. The activity was similar to that which occurred when the sexual images were viewed.

The Penn research group has hypothesized that the recognition of the drug cues is the result of pure Pavlovian conditioning. According to this view, drug craving becomes a learned reflexive response. With exposure to relevant stimuli, its onset is automatic. The group also suggests that the similarity between how the drug stimuli and sexual stimuli were processed indicates these patients’ brains are viewing drugs in the same way they are viewing biologically rewarding stimuli. Many scientists believe our brains have evolved to view life-promoting goals like procreation and food as rewarding because they are essential to our survival as a species (thus the desire to achieve them was naturally selected for). It seems drugs can hijack the brain’s reward circuitry and convince it drugs are as important as the rewards we have been genetically programmed to pursue. This presents a grim picture of addiction. In my opinion, however, studies such as this one are important in coaxing our society to view addiction as a disorder that requires treatment, instead of a social stigma or a behavior that in and of itself necessitates incarceration.

Friday, February 1, 2008

Don't it Make My Brown Eyes Blue

We focus quite a bit on eye color. People find certain eye colors more attractive than others. Whether or not you can remember someone’s eye color is used as a gauge of how well you know them (much to the chagrin of men). In short, we have come to consider variation in eye color as an important part of who we are. Thus, it’s hard to imagine a time when everyone had the same color eyes: brown.

That’s the image a group of researchers from the University of Copenhagen are asking us to conjure up. They have recently reported finding a genetic mutation that occurred 6,000-10,000 years ago, which resulted in the first pair of blue eyes. The mutation, they assert, affected a gene called OCA2. This gene encodes a protein involved in the production of melanin, a polymer responsible for the pigmentation of our skin, hair, and eyes. The mutation causes less melanin to be produced in the iris. This, according to the research group, is what caused eye color to become diluted from a universal brown to blue.

How the heck could they know this, you might ask? The answer is by studying mitochondrial DNA (mtDNA). Every cell in our body contains DNA within its nucleus. This nuclear DNA is the type most of us are familiar with, and the type used for paternity testing, forensics, disease screening, etc. But mtDNA is a less publicized form of DNA that resides in the mitochondria, organelles outside the nucleus that provide energy for the cell (remember the “powerhouse of the cell” from grade school?). When two people procreate, their nuclear DNA is mixed, resulting in offspring that possess a combination of their genetic codes. The mtDNA from the father, however, never makes it into the egg with the sperm. It is left behind, and only the mother’s mtDNA is passed on. Thus, a person’s mtDNA represents his or her pure line of maternal ancestry.

Geneticists can use mtDNA to obtain information about that ancestry back through many generations. To do this, they examine the mtDNA and compare it with samples from other individuals. They can then create a “family tree” of mtDNA. This was how scientists inferred that the most recent common matrilineal ancestor of all living humans was a woman from Africa (often referred to as “Mitochondrial Eve”).
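
The comparison step itself is conceptually simple. The toy Python sketch below counts pairwise differences between short, made-up mtDNA snippets, which is the raw material such a “family tree” is built from; real studies compare full sequences with proper phylogenetic methods, so treat this as illustration only.

```python
from itertools import combinations

# Toy mtDNA comparison: count pairwise differences between short, made-up
# control-region snippets. Real studies compare full sequences and use proper
# phylogenetic methods; this only illustrates the "compare and group" step.
samples = {
    "person_A": "ACGTACGTAC",
    "person_B": "ACGTACGTAT",   # one difference from A  -> close maternal lineage
    "person_C": "ACGAACGTAT",
    "person_D": "TCGAGCTTAT",   # many differences       -> distant lineage
}

def differences(seq1, seq2):
    """Number of positions at which two equal-length sequences differ."""
    return sum(a != b for a, b in zip(seq1, seq2))

for (name1, s1), (name2, s2) in combinations(samples.items(), 2):
    print(f"{name1} vs {name2}: {differences(s1, s2)} differences")
```

The pairs with the fewest differences would sit closest together on the resulting tree.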

The team from the University of Copenhagen studied the mtDNA of a group of 800 people from all over the world to estimate the time of the eye color mutation. A member of the team, Hans Eiberg, described the mutation as neither positive nor negative. This means it didn’t create a selective advantage or deficiency, but was simply the result of genetic shuffling. To me this can’t be said with conviction, however, as blue eyes may have possessed an advantage in sexual selection (meaning they were found attractive, leading to the trait being passed down more often through procreation). Or maybe I’m just inclined to think that way because I have blue eyes…