Thursday, May 29, 2008

Robots Controlled by Monkeys May One Day Enslave Humans

Or they might just eat all of our marshmallows and fruit. Either way, I’m basing my prediction on research published online this week in the journal Nature. Scientists at the University of Pittsburgh School of Medicine developed a robotic arm that they attached to monkeys, whose actual arms were restrained. Using signals from their brains alone, the monkeys were able to control the robot arms to feed themselves marshmallows and fruit.

To accomplish this, probes were inserted into the monkeys’ brains, specifically into the motor cortex, where voluntary movement is initiated. The probes detect the firing of motor neurons when a monkey intends to move its own arm to reach for a piece of food. That electrical activity is fed into a computer program, which translates it into an analogous movement of the robotic arm.
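
For the programming-inclined, here is a rough sense of what that translation step can look like. The sketch below is a population-vector-style decoder, the general family of approach used in this kind of work, not the researchers’ actual algorithm; the neuron count, baseline rate, and gain are illustrative assumptions. Each neuron simply “votes” for movement along its preferred direction in proportion to how far its firing rate rises above baseline.

```python
import numpy as np

# Minimal sketch of a population-vector-style decoder (illustrative only).
# Each recorded neuron is assigned a "preferred direction" in 3-D space; its
# firing rate above baseline votes for movement along that direction.

rng = np.random.default_rng(0)

n_neurons = 50
preferred_dirs = rng.normal(size=(n_neurons, 3))
preferred_dirs /= np.linalg.norm(preferred_dirs, axis=1, keepdims=True)

baseline_rate = 10.0  # spikes/s, assumed resting rate for every unit
gain = 0.02           # arbitrary scaling from spikes/s to meters/s

def decode_velocity(firing_rates):
    """Map a vector of firing rates (spikes/s) to a 3-D arm velocity."""
    modulation = firing_rates - baseline_rate      # rate above/below baseline
    return gain * (modulation @ preferred_dirs)    # weighted sum of preferred directions

# Example: neurons tuned near the "toward the food" direction fire harder,
# so the decoded velocity points roughly toward the food.
food_direction = np.array([1.0, 0.0, 0.0])
rates = baseline_rate + 20.0 * np.clip(preferred_dirs @ food_direction, 0, None)
print(decode_velocity(rates))
```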

This isn’t the first time brain activity has been indirectly converted into some form of external action. Past successes have primarily been with simpler tasks, however, such as the movement of a cursor on a computer screen.

In the case of the robotic arm, the monkeys must learn to visualize reaching for the food. The movement of the arm is reasonably fluid and the researchers suggest the monkey eventually comes to think of it as a natural extension of its own body.

This type of device could have amazing potential for sufferers of spinal cord injuries or “locked-in” conditions like Lou Gehrig’s disease (amyotrophic lateral sclerosis).

Have a look at the robot arm in action below. Emotions may range from kinda cute to kinda creepy.

Wednesday, May 28, 2008

Improving Electroconvulsive Therapy

Electroconvulsive therapy (ECT) is thought by many in the general public to be a brutal and inhumane form of treatment. This perception likely has a number of causes, including improper use and administration in ECT’s earlier days, its depiction as a method of torture in fictional accounts like One Flew Over the Cuckoo’s Nest, and perhaps even a backlash against invasive psychological procedures, one that may have grown out of the disastrous frontal lobotomy experiments.

The truth, however, is that when ECT is applied properly, it can be an effective form of treatment for those who suffer from severe depression. At times it may be the only option (besides talk therapy) for the subset of this group who don’t respond to antidepressant drugs. The exact mechanism by which it works is poorly understood, although there are indications it may prompt production of brain-derived neurotrophic factor (BDNF) in patients who don’t gain this beneficial effect from drugs. (See my post here for more info on the importance of BDNF levels in depression.)

ECT is not without its side effects, however. Even when administered appropriately, it can result in retrograde and/or anterograde amnesia. Other cognitive problems (e.g. disorientation) have also been noted. In most cases, these side effects clear up fairly quickly after treatment. Occasionally they are found to linger for weeks or even months, though—enough to make one hesitant to use, or agree to undergo, an ECT treatment.

A group of scientists from Columbia University, however, has just released a report detailing a form of ECT that not only has a higher rate of effectiveness than standard ECT, but also produces fewer cognitive side effects. The group used a different form of electrical pulse, called an “ultrabrief pulse”, and compared the outcome to traditional ECT in a group of 90 depressed patients. The ultrabrief pulse lasts about 0.3 milliseconds, compared to the traditional 1.5 milliseconds.

Among the 90 patients, 73% of those who received the ultrabrief pulse responded, compared to 65% of those who received the standard form of ECT. More importantly, the ultrabrief-pulse group reported less severe cognitive side effects than the traditional group. Patients were monitored for a full year after treatment.

This may be promising news for severely depressed patients. If ECT can be administered with lower rates of concomitant cognitive dysfunction, it may become a more viable alternative for those who don’t respond to today's antidepressants. Another important step in the use of ECT, however, is to be able to fully understand why it works—something that has yet to be elucidated.

Saturday, May 24, 2008

The Neuroscience of Distributive Justice

Since the emergence of philosophical thought, an unresolved debate has persisted over a general definition of justice and equity. One aspect of that debate involves distributive justice, or how goods and benefits should be dispersed throughout a society in a fair and just manner. As an extreme example of this dilemma, imagine you are commissioned to deliver 100 lbs. of food to a famine-stricken region consisting of two villages a hundred miles apart. You could deliver all of the food to the first village, or you could split it between the two; in that case 30 lbs. would spoil during the trip to the second village, leaving each village with only 35 lbs. Would you deliver all of the food to the first village, or provide each village with only 35 lbs. in the pursuit of equity? What if you knew that 35 lbs. of food was not enough to fully alleviate the suffering of either village until the next shipment arrived?

Philosophers have offered several solutions to debates of this nature. Utilitarianism, a concept with ancient roots but most frequently associated with Jeremy Bentham and John Stuart Mill, asserts that one’s primary goal should be to achieve the maximal amount of good or happiness. In the situation described above, a utilitarian might opt to deliver all of the food to the first village: doing so maximizes the total amount of good done, whereas splitting the shipment wastes 30 lbs. and leaves both villages still suffering. Thus, delivering all of the food to the first village serves the greater good.

Another approach to such a quandary is known as deontological ethics, which emphasizes not the consequences of one’s actions, but whether the actions are right or wrong, just or unjust. From a deontological perspective, it would be unjust to distribute the food unequally. A desire for some degree of fairness in all dealings seems to be a universal human trait, something deontologists point to in support of their doctrine.
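
To make the trade-off concrete, here is a toy calculation using the numbers from the famine example above. The “efficiency” and “inequity” measures are simple stand-ins I chose for illustration, not metrics from any study: efficiency is just the total food delivered, and inequity is the difference between the two villages.

```python
# Toy comparison of the two delivery plans from the famine example above.
# "Efficiency" is total food delivered; "inequity" is the absolute difference
# between the villages. Both measures are illustrative stand-ins.

plans = {
    "all to village 1": (100, 0),    # (lbs to village 1, lbs to village 2)
    "split the shipment": (35, 35),  # 30 lbs spoil in transit to village 2
}

for name, (v1, v2) in plans.items():
    efficiency = v1 + v2     # the utilitarian criterion: maximize total good
    inequity = abs(v1 - v2)  # the deontological concern: minimize unfairness
    print(f"{name}: delivered = {efficiency} lbs, inequity = {inequity} lbs")

# Output:
# all to village 1: delivered = 100 lbs, inequity = 100 lbs
# split the shipment: delivered = 70 lbs, inequity = 0 lbs
```
A utilitarian looks at the totals and delivers everything to the first village; a deontologist looks at the inequity and splits the shipment. The fMRI study described below asks which of these considerations the brain actually tracks.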

Another question about distributive justice involves the extent to which emotion plays a role in the decisions it calls for. Many philosophers, both ancient and contemporary, assert that rational thinking is what allows us to make choices in difficult situations like the one above. Others argue that the processes behind those decisions cannot be devoid of an emotional influence, specifically one of an empathetic or sympathetic nature.

A study in this week’s Science examines distributive justice from a neural perspective, asking: which areas of the brain are active when we make such decisions? To find out, researchers used fMRI to scan the brains of 26 adults while they made decisions about allocating money to groups of children living in an orphanage in Uganda. During the allocations, the participants had to make a number of decisions involving trade-offs between efficiency (the utilitarian concern) and equity (the deontological one).

The investigators found that distinct neural regions are activated in the consideration of equity and efficiency. The putamen, a subcortical structure that forms part of the dorsal striatum, seemed to be correlated specifically with efficiency. Activity in the bilateral insular cortex, on the other hand, was correlated with inequity, and regions of the caudate were activated by both. They also found that individual differences in aversion to inequity corresponded with higher neural activity in the insula.

Overall, the participants showed the greatest neural reaction to an inequitable distribution of food, leading the authors of the study to speculate that distributive decisions are made to avoid inequality more so than to engender efficiency. Thus, the results of this experiment seem to support the deontological argument. As the insular cortex is thought to play an important role in emotional processing, the experiment also indicates that our decisions are not devoid of an emotional element (contrary to the beliefs of Kant and Plato).

Thus, the imaging evidence from this study may help to explain why the debate over distributive justice has never been resolved. The concepts of equity and efficiency, and their respective values, are deeply rooted in our brains. Perhaps evolution never resulted in the disappearance of one or the other because they both are valuable in the decision-making process, depending on the situation. When all is said and done, though, it may be that the evolutionary value of fairness overrides that of efficiency.

Reference:

Hsu, M., Anen, C., Quartz, S.R. (2008). The Right and the Good: Distributive Justice and Neural Encoding of Equity and Efficiency. Science, 320(5879), 1092-1095. DOI: 10.1126/science.1153651

Wednesday, May 21, 2008

Does Money Affect the Way You Think?

Money, perhaps more so than any other modern symbol, can elicit a vast array of emotions (depending to a large degree on its abundance in one’s life), including yearning, anxiety, pride, greed, envy, depression, and happiness. Of course, there is not simply a direct correlation between money and any one of these emotional states, such as more money equaling more happiness or vice versa. In fact, past research has found that the effects money has on one’s well-being can be quite disparate. On one hand, having more money may be good for your health and emotional state. On the other, people who place a high value on money have been found to have poorer social relationships than those who take a more moderate view toward the attainment of wealth.

A group of researchers recently conducted a series of experiments to explore this paradoxical aspect of affluence. They formulated two hypotheses about the dual nature of money in the modern world. First, since money is the basis of most exchange in today’s society, they suggested that the thought of money should make people more focused on cost-benefit analyses and a market-pricing view of their environment. They thought that this perspective might encourage more emphasis on individual performance, since money is often correlated with the completion of personal tasks in our business-based economy. They predicted people with money on their mind would think of life in terms of inputs and outputs, with an awareness that greater input should result in a greater output.

They also hypothesized that the market mentality, while beneficial for personal performance, might hinder one’s ability to interact socially. Because it fosters a focus on individual performance, it might cause a decrease in sensitivity towards the needs of others.

To test their hypotheses, they used several different methods of exposing participants to money-related cues, while attempting to make the cues subtle enough that the subjects wouldn’t be aware of their presence. In one experiment, some participants sat at a desk with a screensaver that depicted money, while others saw screensavers of fish or flowers. In another, participants had to organize phrases that were or were not related to money, such as “I cashed a check” or “I wrote the letter”. Several other methods of exposure to money cues were used.

After being exposed to the cues, the participants were put in various social situations that tested their desire to be helpful, generous, sociable, or industrious. For example, to test willingness to help, a confederate would walk by and drop a handful of pencils (27 to be exact). Or, in another situation, a confused colleague would ask for assistance in understanding a task they were attempting to complete. Those who were exposed to money cues picked up fewer pencils, and those who weren’t spent 120% more time helping the confused colleague.

When given an opportunity to donate a portion of the $2 they had received at the start of the study, participants who had been reminded of money donated 39% of their payment, while those who hadn’t been donated 67%. When allowed to arrange the chairs in a room while waiting for another person to arrive, the money-primed participants also put more distance between their chair and the other person’s than the money-naïve group did. And when given a list of solo vs. group activities to take part in, the money-exposed group chose more individual activities than the control group (even when some activities included family members and friends).

Given the choice between working on a task alone or getting help from a peer, the money-reminded participants chose to work alone, even though it meant doing more work. When faced with a challenging task, they spent 48% more time working at it before seeking help from the experimenter.

The researchers suggest these results may arise because a money-oriented person is focused on the inputs and outputs of the market, a view that tends to lead to an emphasis on individualization and self-sufficiency. They found no differences in emotion between the two groups, and thus argue that the behavioral differences are probably not due to distrust of others. Additionally, the fact that those who were reminded of money chose to persist at a task before asking for help indicates the results are not based purely on selfishness, as a selfish person would not have been so willing to do more work than necessary.

Regardless, the results do suggest that money can inspire an aversion to social interaction and a focus on the self. In moderation, however, this may be a necessary part of a capitalistic society, in which one must make sure they are treated equitably and compensated fairly for their work, and in which they are forced to compete for their livelihood. An interesting follow-up to this experiment would be to use neuroimaging to see what is going on in the brains of participants when they make decisions after exposure to money cues, and how it differs from controls.


Reference:

Vohs, K.D., Mead, N.L., Goode, M.R. (2008). Merely Activating the Concept of Money Changes Personal and Interpersonal Behavior. Current Directions in Psychological Science, 17(3), in press.

Monday, May 19, 2008

microRNAs and Schizophrenia

Over the past twenty years, our understanding of gene expression has grown tremendously. As is often the case, however, with that increased level of comprehension has come a realization that the process is even more complex than originally thought. Thus, the relatively simple model of mRNA being transcribed from DNA, then traveling to ribosomes where it is translated into proteins (with the help of tRNA and rRNA), is now thought to be just a rough summary of the process. A number of other molecules, such as transcription factors (TFs) and microRNAs (miRNAs), are also involved in the expression of genes.

TFs are proteins that bind to sections of DNA and control the transfer of genetic information from DNA to RNA. They are integral to development, management of the cell cycle, responses to environmental changes, and intercellular communication. miRNAs are small, single-stranded RNA molecules that are transcribed from DNA but not translated into proteins. Each miRNA is complementary to a particular section of mRNA, and by binding to that mRNA it can suppress the gene’s expression. Individual TFs and miRNAs can control anywhere from dozens to hundreds of genes in the human genome, with some estimates running much higher.
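
Since that mRNA-binding step is what gives miRNAs their regulatory punch, a toy example may help. The sketch below checks whether the reverse complement of a miRNA’s “seed” region (roughly nucleotides 2-8) appears in a target mRNA’s 3' UTR, which is about the simplest possible model of target recognition. The sequences are made up, and real target prediction also weighs pairing energy, site context, and conservation.

```python
# Toy model of miRNA target recognition: does the reverse complement of the
# miRNA "seed" (nucleotides 2-8) appear in the 3' UTR of a candidate mRNA?
# Both sequences are invented for illustration.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def seed_site(mirna):
    """Return the reverse complement of the miRNA seed (nucleotides 2-8)."""
    seed = mirna[1:8]  # 0-based slice covering nucleotides 2 through 8
    return "".join(COMPLEMENT[nt] for nt in reversed(seed))

mirna = "UAGCUUAUCAGACUGAUGUUGA"          # made-up miRNA sequence
mrna_3utr = "AGGCAUAAGCUACCGAUAAGCUAGGU"  # made-up 3' UTR of a target mRNA

site = seed_site(mirna)
print(site, "found in 3' UTR:", site in mrna_3utr)  # AUAAGCU found in 3' UTR: True
```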

Fully understanding the role of TFs and miRNAs is essential for uncovering the etiology of genetically based disorders. Recently researchers at Columbia University Medical Center (CUMC) found that changes in miRNA levels can result in cognitive and behavioral deficits. They believe miRNAs could be involved in the development of schizophrenia in humans.

In the past, a higher incidence of schizophrenia has been correlated with a deletion of a small part of chromosome 22, at a location designated as q11.2. One of the genes in that chromosomal section is called Dgcr8. It plays an integral role in miRNA production. Thus, the researchers at CUMC hypothesized that the absence of Dgcr8 and the resultant reduction in miRNAs might be part of the etiology of schizophrenia.

They engineered a strain of mice lacking the Dgcr8 gene. As they predicted, the mice exhibited behavioral and neuroanatomical deficits similar to those seen in people with schizophrenia.

While this is an important step in understanding one of the most perplexing disorders medicine has ever had to confront, it is not exactly heartening. miRNAs have widespread effects on gene expression throughout the brain. This may help to explain why schizophrenia has been so difficult to decipher, as it is probably the result of a number of genetic aberrations. Unfortunately, though, it is further indication that schizophrenia is very complex, and much more investigation will be needed to fully comprehend its origin.

Wednesday, May 14, 2008

Would You Vaccinate Your Kids Against Drugs?

This is not just a question intended to incite thought or debate; it’s an issue that future parents, or parents of children under the age of 10, may actually face before their child turns 18. Clinical trials are currently underway for vaccines intended to treat cocaine and nicotine addiction. Both have been shown to be effective, without adverse effects, in phase I trials and have moved on to phase II. So, if the treatments continue to demonstrate efficacy without harm, it is conceivable they could be available for use in humans within a decade.

Cocaine has proven to be one of the most frustrating drugs of abuse for the pharmacology field because, unlike heroin (methadone), alcohol (naltrexone), and nicotine (bupropion, nicotine gum, etc.), no accepted pharmaceutical treatment for cocaine dependence has been developed. Yet cocaine is one of the most addictive drugs of abuse, as well as one of the most widely used, with over 14 million users across the globe. According to Scientific American, reducing the rate of cocaine use in the United States alone could result in a savings of $745 million in medical, legal, and other related expenses.

The failure to find acceptable treatments for cocaine addiction has led researchers to investigate the plausibility of using immunotherapy, which involves administering a vaccine to raise an immune response against the drug. To do this, the drug must be delivered along with an immunogen, or antigen: a substance, often a protein, that can provoke an immune response. Since the drug obviously cannot raise an immune response by itself (or drugs wouldn’t be so popular), it is linked to an immunogen and then administered to the patient. When the immune system senses the presence of the antigen, antibodies bind to it. The resulting antibody-antigen complex is too large to cross the blood-brain barrier, preventing most of the drug from entering the central nervous system (CNS). This drastically reduces the influence of the drug, largely eliminating the rewarding quality of its use.

Since the initial vaccines were developed, research has uncovered even more effective methods of vaccination against cocaine use. A few years ago a group at The Scripps Research Institute found a monoclonal antibody with an extremely high affinity for cocaine. They found that, when displayed on the coat of a bacteriophage, the antibody could be carried past the blood-brain barrier and into the CNS, where it could be even more efficient at diminishing the effects of cocaine.

A bacteriophage is a virus that infects bacteria, usually consisting of genetic material enclosed in a protein coat. Despite the nocuous connotation of their name, bacteriophages are not dangerous to eukaryotic cells. They are useful as vectors because they tend to be very durable and able to withstand great variations in external conditions, and their ability to pass through the blood-brain barrier made them a great candidate for carrying the antibody. Their use in this study resulted in significant reductions in the psychostimulant effects of cocaine in rats.

A form of this vaccine developed by The Scripps Research Institute is now working its way through the clinical trial process. A vaccine against cocaine or nicotine would still require some degree of compliance, however. From most indications, a vaccine would take several injections over a period of up to 3 months to become effective, and regular booster vaccinations every 2 to 6 months would probably be necessary after that.

For adults, then, the desire to get treatment (or court-ordered treatment in certain situations) would be a necessary first step. For minors, however, it’s conceivable a parent could be given the option of mandating a vaccination schedule, whether therapeutic or preventive. So, for all you parents with children who won’t turn 18 within the next decade, what will you do?

Important associations between a drug and its rewarding quality are formed within the first several uses of the drug. Would you take the steps to vaccinate your child against nicotine, so that when she tries cigarettes the first few times she tosses them away in disgust and wonders what the big fuss is all about? Or against cocaine, so that if he happens to try it at a party, he experiences no rewarding effect? Would you tell your child about the vaccination? If you did, it might still leave room for some curiosity about drug use, since your child would know that, being vaccinated, she isn’t experiencing the drug’s “real” effects. That could make her more inclined to try it after age 18, when you can no longer have such a peremptory influence. Think about it; it may be a decision you will one day have to make…


Reference:

Carrera, M.R. (2004). Treating cocaine addiction with viruses. Proceedings of the National Academy of Sciences, 101(28), 10416-10421. DOI: 10.1073/pnas.0403795101

Monday, May 12, 2008

Encephalon #45

It is up at PodBlack Blog, in tribute to Erik Erikson on the anniversary of his death. Enjoy and mourn, in whatever proportion you deem appropriate.

Saturday, May 10, 2008

Ketamine and Depression

Ketamine is a drug with a very wide range of uses. Developed in 1962 to be an alternative anesthetic to phencyclidine (PCP), it was first used as a battlefield anesthetic. It eventually became a popular veterinary medicine, used for anesthetic purposes with small animals (e.g. cats) and as an analgesic for larger animals like horses. It also became an established recreational drug, known for its psychedelic side effects and commonly referred to as “special K”.

Several years ago doctors noticed an unexpected behavioral effect while using ketamine to treat complex regional pain syndrome (CRPS) in human patients. It appeared to alleviate symptoms of depression associated with the CRPS. Further studies verified this therapeutic effect, while noting one advantage over other contemporary antidepressant medications: it began working within 24 hours of the dose.

This aroused great interest in understanding ketamine’s mechanism of action. Due to its side effects, most researchers were unwilling to advocate use of the drug itself. But if its method of action could be elucidated, then perhaps similar quick-acting antidepressants (without psychedelic side effects) could be developed.

Research has indicated that the neuropharmacology of ketamine is complex. It appears to act on the brain’s glutamate system, which has only recently been implicated in depression. Ketamine is an antagonist (i.e., it inhibits activity) at a glutamate receptor called the NMDA receptor. Blocking this receptor seems to cause an increase in glutamatergic activity at another receptor, known as the AMPA receptor, and it is thought this secondary activity may be integral to ketamine’s quick action.

Recently a neuroimaging experiment shed some more light on how ketamine exerts its effects regionally. Researchers at the University of Manchester found that almost immediately after injection of ketamine, high levels of activity in the orbitofrontal cortex (OFC) stopped.

The OFC is thought to be involved in the regulation of affective states, and abnormal activity has been found there in depressed patients. The researchers in this study suggest it is the quick action of ketamine to quiet overactivity in the OFC that may be responsible for its rapid antidepressant effects.

Watch for more research to focus on the glutamatergic system in relation to depression. The greatest downside of antidepressant drugs today is the long time a patient must wait for them to have an effect (up to 4 weeks in many cases). Manufacturing a quick-acting antidepressant would be a boon for any pharmaceutical company, so expect them to heavily investigate the potential of glutamate-influencing drugs.

Tuesday, May 6, 2008

Ghrelin and the Omnipresence of Food

It really is difficult to travel a mile in this country without being exposed to something trying to entice you to eat. Billboards, mini-marts, and restaurants have saturated our environment with visual cues that remind us of the importance of feeding. When at home the television, radio, or internet can be helpful if one has a tendency to forget the necessity of food—especially that of the fried, dripping, or cheesy variety. The advertisers behind all of these reminders are hoping that when you encounter them, your stomach will be coincidentally flooding your hypothalamus with ghrelin.

Ghrelin is a hormone produced by the gut. You may have heard of ghrelin’s counterpart, leptin, a hormone that is integral in letting the brain know you have had enough to eat. The importance of that signal can be seen in mice with a genetic mutation that leaves them unable to produce leptin: they overeat dramatically and become severely obese.

Ghrelin seems to play the opposite role: it lets you know that the stomach is getting empty and it is time to eat. Ghrelin levels are highest before a meal and lowest afterwards, and when ghrelin levels are raised experimentally, people eat more than those administered a placebo.

Ghrelin receptors (and leptin receptors) are especially prevalent in the hypothalamus, but a recent neuroimaging experiment shows that ghrelin may have a much more widespread effect on the brain.

A study published this week in Cell Metabolism describes the use of functional magnetic resonance imaging (fMRI) to investigate ghrelin-related brain activation. Participants were scanned while looking at food and nonfood images. Some of the subjects received an infusion of ghrelin before the fMRI.

The ghrelin infusion increased activation in areas associated with evaluating the hedonic value of a stimulus (the “reward centers” of the brain), along with a larger network that includes visual and memory areas. These are some of the same regions thought to be responsible for drug-seeking and other types of addictive behavior.

Thus, high levels of ghrelin may make our advertisement-laden, food-saturated environment a dangerous one in which to live. But the hormone may also represent a plausible target for treating obesity. Vaccines that raise an immune response against ghrelin have been shown to reduce weight gain in rats, and their use in humans is currently being investigated.

Unless and until such treatments are found to be effective, it is best just to try to ignore the constant urgings all around us to “eat, eat, eat”.

Thursday, May 1, 2008

Gene Therapy: Struggling to Leave the Past Behind

Gene therapy is a relatively new method of treatment that involves replacing the defective allele of a gene with a functional one. The technique, originally thought to hold great potential for the treatment of genetic diseases, was at first greeted with excitement and enthusiasm. This enthusiasm continued to grow after the first successful administration of gene therapy in 1990, to improve the health of four-year-old Ashanthi Desilva (born with severe combined immunodeficiency).

Since then, however, gene therapy has had its ups and downs, hitting rock bottom with the death of 18-year-old Jesse Gelsinger in 1999. Gelsinger wasn’t in a life-or-death situation. He volunteered for the study because of a brush with death early in his life caused by a genetically inherited liver disease, hoping that a cure would spare others some of the trials he had endured as a young boy. Gelsinger, however, wasn’t informed of some of the possible dangers of the treatment he was about to undergo, dangers the scientists involved in the study were aware of. He died several days after treatment.

Since then, gene therapy has struggled to creep out from under the shadow of that dark incident. Continued successes, however, indicate that gene therapy may still have the opportunity to live up to its once heralded potential. One example is a study reported this week in the New England Journal of Medicine that describes successfully using gene therapy to restore vision in three young adults born with severe blindness.

The subjects suffer from a disease known as Leber congenital amaurosis (LCA), which usually leads to complete blindness by middle age and is thought to be caused by a mutation in a gene called retinal pigment epithelium 65 (RPE65). The gene encodes a protein that converts vitamin A into a form that the rods and cones of the eye can use to make rhodopsin (a pigment that absorbs light).

The researchers injected one eye of each patient with a harmless virus carrying the healthy form of the RPE65 gene. After only two weeks, all of the participants reported improved vision in dimly lit environments. Within six weeks, some of the patients were able to read several lines of an eye chart or navigate an obstacle course—dramatic improvements over their previous levels of legal blindness. The researchers involved suggest that, due to the efficacy of this treatment, it could eventually be applied to other eye disorders, such as macular degeneration.

Every advance made in the use of gene therapy is a major one, as after the death of Jesse Gelsinger, many were quick to condemn the use of the procedure as unsafe and irresponsible. While the scientists involved in the Gelsinger debacle deserve those criticisms, the procedure itself holds great promise for understanding and ameliorating some of the worst afflictions humans face. Hopefully one day the number of lives improved and saved through the use of gene therapy will soften the sting of the egregious mistakes made in its early history.