The disturbing story of the first use of electroconvulsive therapy

Image credit: University of Liverpool Faculty of Health & Life Sciences

If you were able to glance inside a certain room on the first floor of the Clinic for Mental and Nervous Diseases in Rome on the morning of April 11th, 1938, it might have looked like a small group of physicians was about to commit a murder.

The doctors were congregated around a bed in a large, isolated laboratory, and on the bed lay a middle-aged man with a surgically-shaved head. The nervousness of the physicians would have been difficult to ignore. They were abnormally quiet—the type of uncomfortable silence that can only be created by extreme tension. Despite the cool temperature of the room, their foreheads were beaded with perspiration. One of them repeatedly walked out the door to look up and down the hallway, just to make sure no one was around.

They had good reason to be uneasy. They were preparing to send an amount of electricity that far exceeded what was considered safe at the time coursing through their patient’s brain. Indeed, at least some in the room must have feared they were about to be complicit in an execution.

On top of that, the patient hadn’t quite consented to be their guinea pig. The police had brought him into the clinic after they had found him wandering the streets of Rome in a delusional state. He was unable to provide simple information about where he was from or whether he had any family; in fact his “speech” was mostly gibberish. The police thought he was just another schizophrenic vagrant, and they probably believed they were being compassionate by bringing him somewhere he could get treatment.

But little did they know that a handful of physicians at the clinic had been waiting for a patient whose life was perhaps a bit more expendable than the rest. This man who had descended into a seemingly irreversible state of mental discord, who had no family, friends, or home to return to—he was deemed the perfect type to test an experimental, potentially life-threatening treatment on. He would be the first human to undergo what was originally called electroshock therapy.

A dangerous idea

As reckless as this might all sound, the scientists who spearheaded the experiment weren’t doing it on a whim. They had been conducting similar experiments with animals for years before building up the confidence to try the procedure with humans.

The idea had started with the director of the experiment, Ugo Cerletti. Cerletti was a respected Italian neurologist who was passionate about finding treatments for psychiatric disorders. At the time (the 1930s), mental illness was often considered irreversible, and successful therapies were difficult to come by.

Cerletti had not shied away from extreme treatments in the past. In 1937, he had begun using a stimulant drug called Cardiazol (aka Metrazol) to treat schizophrenia. When given in high doses, Cardiazol would induce seizures. This sounds like an undesirable—and potentially horrifying—side effect, but to schizophrenia researchers, it was exactly what they wanted to happen. For the thinking at the time was that there was something about the convulsions of a seizure that could counteract the effects of disorders like schizophrenia on the brain.

The use of Cardiazol to cause seizures quickly became popular, due mostly to the fact that physicians didn’t have many other options when it came to schizophrenia. But Cardiazol had a few “unpleasant” side effects. For some patients, the drug caused apprehension bordering on terror from the time it was injected until the time the seizure began. This intense dread was not simply a fear of the oncoming seizure, but seemed instead to be a psychological side effect of the medication. The behavior of patients after the seizure could be problematic as well. Some became unpredictable, irrational, and—in rare cases—even suicidal.

These factors, combined with a paucity of evidence to suggest that Cardiazol actually was having an effect that was specific to schizophrenia (it actually seemed that Cardiazol could jolt almost any patient out of a stuporous state—whether they suffered from schizophrenia, depression, mania or something else), caused Cerletti to tenaciously seek other treatments. But as he continued to test alternative therapies, he couldn’t stop thinking about electricity.

After all, everyone knew that large doses of electricity could cause convulsions. Maybe, then, electricity could also be used to induce the type of convulsions that were thought to have potential in treating schizophrenia.

When Cerletti began testing this idea on dogs, however, he realized how dangerous the approach might be: about half of the animals subjected to electrical shock died of cardiac arrest. What’s more, Cerletti’s group was using stimulation of around 125 volts to cause convulsions in dogs—and death in humans had been reported after as low as 40 volts.

Every week for almost a year, the local dog catcher dropped off an unfortunate collection of dogs at Cerletti’s lab, and the researchers there immediately began experimenting on them. They soon learned that the original placement of the electrodes (one in the mouth and one in the anus) was a large reason dogs were dying after electrical stimulation. This configuration caused the current to cross the heart, which (not surprisingly) sometimes caused cardiac arrest.

When the electrodes were moved to the head, pulses of electricity produced convulsions—but rarely death. Cerletti’s group replicated their experiments with pigs, and similarly found that electrical current applied briefly to the head caused convulsions without killing. After many tests on canine and porcine subjects, Cerletti was confident that electrical stimulation to the head was not a fatal procedure. It was time for the ultimate test: a human.

The birth of “electroshock”

Which brings us back to the morning of April 11th, 1938. Cerletti was surrounded by a small group of other physicians, a nurse, and an assistant. They had sequestered themselves in a laboratory that had a bed in it, originally installed so the director of the laboratory could rest between experiments.

But now on the bed was a homeless schizophrenic patient with a circular metal apparatus placed on his head. Wires ran from the apparatus to a device on a table nearby.

Lucio Bini—a psychiatrist who had helped to develop the electrical device being used—was watching for Cerletti’s signal to turn the machine on. Everyone else stared intently at the patient, eagerly but apprehensively waiting for something to happen.

Cerletti gave a nod, and Bini flipped the switch to send 80 volts of current surging across the patient’s temples. A flat, mechanical hum emanated from the device, and the muscles throughout the patient’s body contracted spasmodically one time, lifting him up slightly from the bed. Then, his body just as suddenly fell back down—limp, but alive. Upon questioning, the patient didn’t seem to have any recollection of what had just happened.

That was the first proof a human could tolerate this type of controlled electrical stimulation to the head. But Cerletti wasn’t satisfied. He wanted to see convulsions reminiscent of a seizure, not just one spasm. He ordered another shock be given—this time at 90 volts.

The patient’s body convulsed once again, but this spasm lasted a bit longer. The patient stopped breathing—his diaphragm remained contracted—and he began to turn pale. The asphyxia continued for a few seemingly interminable seconds, but then the patient suddenly let out a deep breath. He lay silent for about a minute, then abruptly sat up in bed and began to sing a bawdy song that was popular at the time. The song—as unusual as it was in the moment—elicited a collective sigh of relief from the experimenters, who had naturally begun to wonder if the second shock had been too much.

But again, the whole point was to see if they could prompt a seizure, not just one convulsion. Cerletti wanted to attempt the procedure one more time—with 110 volts.

At this point, according to Cerletti, some of those involved became uneasy, and urged him to stop. Someone suggested the patient be given time to rest; someone else thought it would be better to wait until the next day to continue testing. Then, the patient unexpectedly chimed in with an ominous warning: “Be careful; the first one was a nuisance, the second one was deadly.” Cerletti took in all of these recommendations and simply responded with, “Let’s proceed.”

Bini set the machine for the maximum voltage of 110 volts. When the switch was flipped, that dull humming noise briefly filled the room again. The patient’s muscles contracted in a spasm. But this time, they did not relax immediately afterward. His body began convulsing with the rhythmic shaking of a seizure.

As his body shook, his face began to turn pale due to lack of breathing. Then, it took on a bluish-purple hue—a clear sign of oxygen deprivation. Bini was timing the asphyxia with his watch. It got up to twenty seconds, then thirty… then forty. Surely many in the room feared they had finally gone too far. But at 48 seconds, the patient exhaled violently and fell back to the bed—fast asleep. His vitals were normal. Cerletti declared “electroshock” safe to use on humans.

The aftermath through today

Cerletti’s group ended up giving their patient regular electroshock treatments over the next two months, and eventually they claimed he was completely cured. It turned out that he was not just a vagrant. He had a wife who had been searching desperately for him, and eventually they were reunited—providing a nice conclusion to a success story that was uncomfortably close to being a tragedy.

The use of electroshock therapy—which would eventually come to be known as electroconvulsive therapy, or ECT—spread rapidly. Over time, like any other treatment, the technique was refined, and best practices were established for “dose,” duration of the electrical impulse, and placement of the electrodes.

More substantial changes were made as well. Initially, the convulsions evoked by ECT were violent enough to sometimes cause fractures (often spinal fractures) along with other injuries. So, practitioners started administering muscle relaxing drugs before ECT to reduce the severity of the convulsions. This created another issue: the muscle relaxants temporarily induced complete paralysis, which was often terrifying for patients. Thus, physicians began using anesthesia before the procedure, which allowed patients to remain unaware of the paralysis (or any other unpleasant aspect of the period of time surrounding the seizure).

With these and other modifications, ECT today is considered a safe practice. Serious complications are rare, and memory disturbances are the most problematic side effect. Typically, these memory problems fade with time—although there have been cases where they’ve persisted and had a substantial negative effect on patients’ lives.

The safety of the procedure, however, doesn’t jibe with the perception many people still have about ECT as a dangerous, or even barbaric, method. This perception was created in large part by negative portrayals of ECT in movies and television shows—a classic example being the use of ECT as a disciplinary measure in a psychiatric hospital in the 1975 movie One Flew Over the Cuckoo’s Nest (based on Ken Kesey’s novel of the same name).

ECT has been used in an abusive and/or unscrupulous manner at times, so some of these portrayals may have a grain of truth to them. But ECT today is typically only administered with the full consent of the patient, and the procedure now is much less distressing—for the patient and observer alike—than these fictional depictions suggest.

And, although it’s still not understood how ECT might act on the brain to produce its therapeutic effects, it’s difficult to dispute that it is effective for some conditions. It didn’t end up being the remedy for schizophrenia that Cerletti had hoped (it does seem to be useful in certain cases of schizophrenia, but most studies generally find antipsychotic drugs to be more effective), but it is surprisingly effective in its most common application today: the treatment of depression.

In fact, many argue that ECT is among the most potent treatments we have for depression. A number of studies have found it to be as effective as—or more effective than—antidepressant medication, causing some to argue that it’s an extremely underutilized therapeutic approach. Regardless, a number of factors ranging from cost to its potential impact on memory cause ECT to remain more of a “last resort” for depression treatment.

Nevertheless, ECT has found its way back onto the list of respectable therapies in the eyes of most doctors and researchers. And given its somewhat ignominious beginnings as a dangerous experiment with a non-consenting patient, this is quite an achievement.

References (in addition to linked text above):

Accornero F. An Eyewitness Account of the Discovery of Electroshock. Convuls Ther. 1988;4(1):40-49.

Cerletti U. Old and new information about electroshock. Am J Psychiatry. 1950 Aug;107(2):87-94.

Payne NA, Prudic J. Electroconvulsive therapy: Part I. A perspective on the evolution and current practice of ECT. J Psychiatr Pract. 2009 Sep;15(5):346-68. doi: 10.1097/01.pra.0000361277.65468.ef.

Optograms: images from the eyes of the dead

On a cloudy fall morning in 1880, Willy Kuhne, a distinguished professor of physiology at the University of Heidelberg, waited impatiently for 31-year-old Erhard Reif to die. Reif had been found guilty of the reprehensible act of drowning his own children in the Rhine, and condemned to die by guillotine. Kuhne’s eagerness for Reif’s death, however, had nothing to do with his desire to see justice served. Instead, his impatience was mostly selfish—he had been promised the dead man’s eyes, and he planned to use them to quell a bit of scientific curiosity that had been needling him for years.

For the several years prior, Kuhne had been obsessed with eyes, and especially with the mechanism underlying the eye’s ability to create an image of the outside world. As part of this obsession, Kuhne wanted to determine once and for all the veracity of a popular belief that the human eye stores away an image of the last scene it observed before death—and that this image could then be retrieved from the retina of the deceased. Kuhne had given these images a name: optograms. He had seen evidence of them in frogs and rabbits, but had yet to verify their existence in people.

Optograms had become something of an urban legend by the time Kuhne started experimenting with them. Like most urban legends, it’s difficult to determine where this one began, but one of the earliest accounts of it can be found in an anonymous article published in London in 1857. The article claimed that an oculist in Chicago had successfully retrieved an image from the eye of a murdered man. According to the story, although the image had deteriorated in the process of separating the eye from the brain, one could still make out in it the figure of a man wearing a light coat. The reader was left to wonder whether or not the man depicted was, in fact, the murderer—and whether further refinements to the procedure could lead to a foolproof method of identifying killers by examining the eyes of their victims.

Optograms remained an intrigue in the latter half of the 19th century, but they became especially interesting to Kuhne when physiologist Franz Boll discovered a biochemical mechanism that made them plausible. Boll identified a pigmented molecule (later named rhodopsin by Kuhne) in the rod cells of the retina that was transformed from a reddish-purple color to pale and colorless upon exposure to light. At the time, much of the biology underlying visual perception was still a mystery, but we now know that the absorption of light by rhodopsin is the first step in the visual process in rod cells. It also results in something known as “bleaching,” where a change in the configuration of rhodopsin causes it to stop absorbing light until more of the original rhodopsin molecule can be produced.

In studying this effect, Boll found that the bleaching of rhodopsin could produce crude images of the environment on the retina itself. He demonstrated as much with a frog. He put the animal into a dark room, cracked the windows’ shutters just enough to allow a sliver of light in, and let the frog’s eyes focus on this thin stream of light for about ten minutes. Afterwards, Boll found an analogous streak of bleached rhodopsin running along the frog’s retina.

An optogram Kuhne retrieved from the retina of a rabbit, showing light entering the room through a seven-paned window.

Kuhne was intrigued by Boll’s research, and soon after reading about it he started his own studies on the retina. He too was able to observe optograms in the eyes of frogs, and he saw an even more detailed optogram in the eye of a rabbit. It preserved an image of light coming into the room from a seven-paned window (see picture to the right).

Kuhne worked diligently to refine his technique for obtaining optograms, but eventually decided that—despite the folklore—the procedure didn’t have any forensic potential (or even much practical use) at all. He found that the preservation of an optogram required intensive work and a great deal of luck. First, the eye had to be fixated on something and prevented from looking away from it (even after death), or else the original image would rapidly be intermingled with others and become indecipherable. Then, after death the eye had to be quickly removed from the skull and the retina chemically treated with hardening and fixing agents. This all had to be done in a race against the clock, for if the rhodopsin was able to regenerate (which could happen even soon after death) then the image would be erased and the whole effort would be for naught. Even if everything went exactly as planned and an optogram was successfully retrieved, it’s unclear if the level of detail within it could be enhanced enough to make the resultant image anything more than a coarse outline—and only a very rough approximation of the outside world.

Regardless, Kuhne couldn’t overlook the opportunity to examine Reif’s eyes. After all, he never did have the opportunity to see if optograms might persist in a human eye after death and—who knew—perhaps optograms in the human eye would be qualitatively different from those made in the eyes of frogs and rabbits. Maybe human optograms would be more accessible and finely detailed than he expected. Perhaps they might even be scientifically valuable.

Reif was beheaded in the town of Bruchsal, a few towns over from Kuhne’s laboratory. After Reif’s death, Kuhne quickly took the decapitated head into a dimly-lit room and extracted the left eye. He prepared it using the process he had refined himself, and within 10 minutes he was looking at what he had set out to see: a human optogram.

Kuhne’s drawing of the image he saw when he examined Erhard Reif’s retina.

So was this the revolutionary discovery that would change ophthalmic and forensic science forever? Clearly not, or murder investigations would look much different today. Kuhne made a simple sketch of what he saw on Reif’s retina (reprinted to the right in the middle of the text from one of Kuhne’s papers). As you can see, it’s a bit underwhelming—certainly not the type of image that would solve any murder mysteries. It confirmed that the level of detail in a human optogram didn’t really make it worth the trouble of retrieval. Kuhne didn’t provide any explanation as to what the image might be. Of course any attempt to characterize it would amount to pure speculation, and perhaps the esteemed Heidelberg physiologist was not comfortable adding this sort of conjecture to a scientific paper.

This experience was enough to deter Kuhne from continuing to pursue the recovery of human optograms, and it seems like it would be a logical end to the fascination with optograms in general. The idea of using them to solve murders, however, reappeared periodically well into the 1900s. In the 1920s, for instance, an editorial in the New York Times critiqued a medical examiner who had neglected to take photographs of a high-profile murder victim’s eyes, suggesting that an important opportunity to retrieve an image of the murderer had been lost.

But as the 20th century wore on and our understanding of the biochemistry of visual perception became clearer, interest in optograms finally dwindled. Those who studied the eye were not convinced of their utility, and that opinion eventually persuaded the public of the same. It’s intriguing to think, though, how different our world would have been if optograms really had lived up to the hype. It certainly would have simplified some episodes of CSI.

Lanska DJ. Optograms and criminology: science, news reporting, and fanciful novels. Prog Brain Res. 2013;205:55-84. doi: 10.1016/B978-0-444-63273-9.00004-6.

History of neuroscience: Julien Jean Cesar Legallois

The idea that different parts of the nervous system are specialized for specific functions has been a pervasive concept in brain science since ancient times, perhaps best exemplified by the belief—dating back to the 4th century CE—that the four cavities of the brain known as the ventricles were each responsible for a different function, e.g. perception in the two lateral ventricles, cognition in the third ventricle, and memory in the fourth ventricle. By the early 1800s, however, there was still no definitive experimental evidence linking a particular function to a circumscribed area of the brain.

Image showing the medulla oblongata, the region of the brainstem that Legallois found was essential to respiration.

This changed with Julien Jean Cesar Legallois, a young French physician who was driven to identify the parts of the brain and body that were essential for maintaining life. The thinking at the time was that the heart and brain were both integral to life, but there was some debate about where the life-sustaining centers in the brain were located. Some, for example, considered the cerebellum to be the organ that controlled vital functions like heartbeat and respiration. Research conducted in the second half of the 18th century by the French physician Antoine Charles de Lorry, however, had suggested that the area of the brain most critical to life was found in the upper spinal cord. Legallois would take Lorry's research a step further by conducting a series of gruesome experiments with rabbits that would help him to specifically pinpoint the center of vital functions in the brain.

Before detailing these experiments, it's important to mention that Legallois' studies were done at a time when the ethical treatment of animals in research—and indeed research ethics in general—was not given much thought. Legallois was a vivisectionist, meaning that he performed surgery on living animals in his experiments. His work would likely not be approved by any university or research institution today, and his own impassive descriptions of his grisly experiments read like something a budding serial killer might have dreamed up before moving on to human victims. But this was a different time, when thoughts about animal welfare were not as well formulated as they are now—and Legallois was far from the only vivisectionist of his day. Indeed, a great deal of our current neuroscience knowledge was developed using experimental methods we would consider unjustifiably cruel today.

Legallois' method of exploring the centers of vital functions in the brain primarily involved the decapitation of rabbits. Legallois observed that after a decapitation made at certain levels of the brainstem, the headless body of a rabbit could continue to breathe and "survive" for some time (up to five and a half hours, according to Legallois). Decapitation further down the brainstem, however, would cause respiration to cease immediately. This observation was in agreement with Lorry's. Legallois then set out to isolate the particular part of the brainstem where these respiratory functions were located.

To do this, Legallois opened the skull of a young rabbit (while the rabbit was still alive), and began to remove portions of the brain—slice by slice. He found that he could remove all of the cerebrum and cerebellum and much of the brainstem, and respiration would continue. But, when he reached a particular location in the medulla oblongata—at the point of origin for the vagus nerve—respiration stopped. Thus, Legallois surmised that respiration did not depend on the whole brain but on one circumscribed area of the medulla. He concluded that the "primary seat of life" was in the medulla, not the cerebellum or cerebrum.

Legallois published the details of his seminal experiment in 1812. We now consider the medulla to be a critical area for the control of respiration as well as the regulation of heart rate, and the region is often considered to be a center of vital functions in the nervous system. Indeed, Legallois was influential in establishing the hypothesis that the brain is involved in the regulation of heart rate as well (prior hypotheses had emphasized the ability of the heart to act alone—without the influence of the brain). While Legallois was not the first to hypothesize that vital functions are localized to the medulla (he was preceded by Lorry), he was the first to provide clear experimental evidence linking the medulla to such functions, and he greatly refined Lorry's estimation of where the vital centers were located. In the process, Legallois gave us our first clear evidence that linked a function to a localized area of the brain.

Cheung T. 2013. Limits of Life and Death: Legallois's Decapitation Experiments. Journal of the History of Biology. 46: 283-313.

Finger, S. 1994. Origins of Neuroscience. New York, NY: Oxford University Press.

For more about the medulla oblongata's role in vital functions, read this article: Know your brain - Medulla oblongata

History of neuroscience: Charles Scott Sherrington

To many, Charles Scott Sherrington is best known for providing us with the term synapse, a word we still use to describe the junction where two neurons communicate. While Sherrington's work to understand synapses and neural communication was important, his studies of reflexes, proprioception, spinal nerves, muscle action, and movement were much more expansive and probably even more influential.

Regardless, his observations concerning synapses are representative of the meticulous care with which he investigated and made deductions about the nervous system and its function. His writings on the synapse came at a time when Santiago Ramon y Cajal was beginning to convince the scientific community that the brain consists of separate nerve cells (which became known as neurons in 1891) rather than a continuous "net" of uninterrupted nerves. One thing missing from this theory was an understanding of how neurons might communicate with one another.

In writing on that issue, Sherrington proposed a specialized membrane—which he termed a synapse—that separates two nerve cells that come together. Microscopes of the day couldn't actually observe the separation found at synapses (which is minutely small), so Sherrington was forced to describe the synapse as a purely functional separation—but a separation nonetheless. He based his hypothesis on observations he made in his own research, like the fact that reflexes (which he studied extensively) weren't as fast as they should be if they involved simply conducting signals along continuous nerve fibers. Sherrington had originally planned to use the term syndesm to describe the functional junction between neurons, but a friend suggested synapse, from the Greek meaning "to clasp," since it "yields a better adjectival form."

Thus the term synapse was born, but for Sherrington his observations about the synapse were really just one part of a much greater investigation into reflexes and nerve-muscle communication. He made an important contribution in this area when he helped to elucidate the mechanism underlying the famous knee-jerk reflex (which you've likely experienced when a doctor has tapped just below your kneecap to cause your leg to kick outwards).

His work on spinal reflexes also led Sherrington to another seminal hypothesis. He proposed that muscles don't just receive innervation from nerves that travel to them from the spinal cord but that they also send sensory information about muscle length, tension, and position back to the spinal cord. Sherrington believed that this information is important for things like muscle tone and posture. He hypothesized that there are receptors in the muscle that convey this type of information, and he specifically identified muscle spindles and Golgi tendon organs as potential receptors that send information about stretch and tension, respectively (this would later be confirmed). To describe the information these muscle receptors send, Sherrington coined another term: proprioception. He chose this term because proprius is Latin for "own" and he wanted to emphasize that the sensory information sent from these muscle receptors comes from an individual's own body, and is not initiated by an external stimulus (as is common with other receptors).

Among Sherrington's many other contributions to understanding movement and muscle function, he also helped to develop a better understanding of the mechanism underlying something called reciprocal innervation. Reciprocal innervation refers to the way in which the activation of one muscle influences the activity of other muscles. This is a common and necessary response. As we walk across the floor, for example, when the muscles involved in the extension of one leg are activated, the muscles involved in the retraction of that same leg must be inhibited. Otherwise, our muscles would constantly be competing with one another, which would result in complete rigidity and make movement (or even standing in one place) impossible. Sherrington didn't discover the phenomenon of reciprocal innervation, but he spent years studying it and in the process gave us a better understanding of how it works. His investigations of reciprocal innervation led to a number of experiments on complex reflexes involved in movements like walking, running, and even scratching. His work helped us to understand how some reflexes involve chaining together several simple reflexive actions to create a seemingly complicated behavioral display.

Sherrington's focus on spinal nerves and reflexes led him to map the motor nerves traveling from the spinal cord to the muscles and the sensory nerves traveling from the muscles to the spinal cord---a task which took him almost ten years. He also explored the functionality of these nerves, helping to create a map of the area of the body served by a single spinal nerve (areas known as dermatomes). And he mapped the ape motor cortex, expanding on previous maps that had been made with dogs and monkeys.

Thus, although Sherrington may be best known for naming the synapse, his other work—which was broad but focused a great deal on muscles, movement, and reflexes—was probably even more valuable to our overall understanding of the nervous system. Sherrington won the Nobel Prize in Physiology or Medicine in 1932, just as he was entering retirement, in recognition of his wide-ranging contributions to neuroscience. He continued to write into retirement, and branched out from scientific writing to publish a collection of poems as well as a book that focused on philosophical themes like the relationship between the mind, brain, and soul. He died in 1952 at the age of ninety-four.

Finger S. Minds Behind the Brain. New York, NY: Oxford University Press; 2000.

History of neuroscience: John Hughlings Jackson

In 1860, when John Hughlings Jackson was just beginning his career as a physician, neurology did not yet exist as a medical specialty. In fact, at that time little attention had been paid to developing a standard approach to treating patients with neurological disease. Such an approach was one of Jackson's greatest contributions to neuroscience. He advocated for examining each patient individually in an attempt to identify the biological underpinnings of neurological disorders. This examination, Jackson asserted, should be guided by the tenets of localization of function, which had been popularized by Franz Joseph Gall in the decades before Jackson was born. Concordant with these tenets, Jackson believed that neurological dysfunction could be traced back to dysfunction in specific foci of the nervous system, and that identifying the affected part of the nervous system was critical to making an accurate diagnosis.

Jackson's perspective on understanding neurological diseases is exemplified by his efforts to elucidate the neurobiological origins of epilepsy---the work he is probably best known for. Jackson's observations on epilepsy date back to the very beginning of his medical career. At that time, the most popular explanation for epileptic seizures was that they were associated with abnormal function in a region of the brain known as the corpus striatum, a term that refers to a composite structure consisting of the striatum and the globus pallidus. The corpus striatum was known to be involved with motor functions, which led to its implication in epileptic seizures as well.

Jackson, however, began to suspect that the cerebral cortex participated in creating the convulsions that epileptics suffered from during seizures. To support this hypothesis, he cited cases where patients experienced convulsions that primarily struck one side of the body. Very often, Jackson argued, autopsies of these patients revealed damage to the cerebral hemisphere opposite the side of the body affected by the seizures.


Jackson approached the idea that there were certain areas of the cortex devoted to movement with hesitancy for multiple reasons. First, the prevailing view at the time was still that the cortex was unexcitable, and thus would be unlikely to be affected by what Jackson considered to be a disease of increased excitability. Additionally, it was still common in Jackson's time to consider the cortex to be homogeneous. Although the concept of localization of function was challenging this idea, many still held the belief that all gray matter in the cortex was equivalent and that there were no areas of functional specialization. According to this view, the entire mass of the cortex had to act together to produce some sort of response. Jackson's idea that seizures could be linked to increased excitability in one half of the cortex did not conform to this perspective.

In addition to his observations about the link between hemispheric damage and seizures on the other side of the body, Jackson also noted a unique feature of some of the seizures he observed. He pointed out that in certain patients convulsions started in one specific area of the body and then proceeded to travel outward from that area in a predictable fashion. For example, convulsions might begin in the hand, then move up the arm to the face, and then down the leg on the same side of the body. Or they might start in the foot and travel up the leg, then down the arm and into the hand on the same side of the body.

This process, later called the Jacksonian march, would help Jackson to formulate some of his most important ideas about the brain. He hypothesized that there were areas of the cortex that were devoted to controlling the movement of different parts of the body. When excitation spreads throughout the cortex, Jackson posited, it stimulates these different areas one by one, creating the Jacksonian march of convulsions through the patient's body. Furthermore, Jackson suggested that the parts of the body that were capable of the most diverse movements (e.g. hand, face, foot) likely had the most space in the cortex devoted to them.

With his observations on epilepsy, Jackson essentially predicted the existence of the motor cortex and anticipated the functional arrangement of the gray matter that makes it up. His hypothesis that there was a distinct region of the cerebral cortex devoted to motor function was confirmed in 1870, when Gustav Fritsch and Eduard Hitzig provided experimental evidence of a motor cortex in dogs. The arrangement Jackson envisioned, in which one part of the cortex is devoted to one part of the body, is now called somatotopic arrangement. It has been verified by a series of experiments, capped by Wilder Penfield's electrical stimulation studies of the 1930s. It is now common neuroscience knowledge that there are regions of the motor cortex devoted specifically to movement of the hands, other regions devoted to movement of the face, and so on. As Jackson predicted, areas of the body involved in more diverse movements generally have more cortical area devoted to them.

Jackson's clinical observations of epilepsy and his hypotheses about the motor regions of the cortex accurately predicted what would soon be discovered through experimentation, and acted as a guide for researchers like Fritsch and Hitzig. Thus, Jackson's work contributed significantly to a better understanding of the organization of the cortex, a region we now consider to be functionally diverse and intricately arranged---a far cry from the idea of cortical homogeneity common in Jackson's time. Additionally, Jackson's development of a more formalized methodology of observation in neurology has led him to be considered one of the founding fathers of the field.

Jackson's contributions to neuroscience, however, were much more extensive than there is room to cover here. He wrote copiously on diverse topics ranging from the evolution of the nervous system to aphasia. At a time when our understanding of the brain was still so lacking in comparison to today, Jackson had a brilliant mind that seemed capable of comprehending brain function in a way that has rarely been replicated in the history of neuroscience.

Finger S. Origins of Neuroscience. New York, NY: Oxford University Press; 1994.

York GK, Steinberg DA. An Introduction to the Life and Work of John Hughlings Jackson. Med Hist Suppl. 2007;(26):3-34.