The disturbing story of the first use of electroconvulsive therapy

Image credit: University of Liverpool Faculty of Health & Life Sciences

If you were able to glance inside a certain room on the first floor of the Clinic for Mental and Nervous Diseases in Rome on the morning of April 11th, 1938, it might have looked like a small group of physicians was about to commit a murder.

The doctors were congregated around a bed in a large, isolated laboratory, and on the bed lay a middle-aged man with a surgically-shaved head. The nervousness of the physicians would have been difficult to ignore. They were abnormally quiet—the type of uncomfortable silence that can only be created by extreme tension. Despite the cool temperature of the room, their foreheads were beaded with perspiration. One of them repeatedly walked out the door to look up and down the hallway, just to make sure no one was around.

They had good reason to be uneasy. They were preparing to send an amount of electricity that far exceeded what was considered safe at the time coursing through their patient’s brain. Indeed, at least some in the room must have feared they were about to be complicit in an execution.

On top of that, the patient hadn’t quite consented to be their guinea pig. The police had brought him into the clinic after they had found him wandering the streets of Rome in a delusional state. He was unable to provide simple information about where he was from or whether he had any family; in fact his “speech” was mostly gibberish. The police thought he was just another schizophrenic vagrant, and they probably believed they were being compassionate by bringing him somewhere he could get treatment.

But little did they know that a handful of physicians at the clinic had been waiting for a patient whose life was perhaps a bit more expendable than the rest. This man who had descended into a seemingly irreversible state of mental discord, who had no family, friends, or home to return to—he was deemed the perfect type to test an experimental, potentially life-threatening treatment on. He would be the first human to undergo what was originally called electroshock therapy.


A dangerous idea

As reckless as this might all sound, the scientists who spearheaded the experiment weren’t doing it on a whim. They had been conducting similar experiments with animals for years before building up the confidence to try the procedure with humans.

The idea had started with the director of the experiment, Ugo Cerletti. Cerletti was a respected Italian neurologist who was passionate about finding treatments for psychiatric disorders. At the time (the 1930s), mental illness was often considered irreversible, and successful therapies were difficult to come by.

Cerletti had not shied away from extreme treatments in the past. In 1937, he had begun using a stimulant drug called Cardiazol (aka Metrazol) to treat schizophrenia. When given in high doses, Cardiazol would induce seizures. This sounds like an undesirable—and potentially horrifying—side effect, but to schizophrenia researchers, it was exactly what they wanted to happen. For the thinking at the time was that there was something about the convulsions of a seizure that could counteract the effects of disorders like schizophrenia on the brain.

The use of Cardiazol to cause seizures quickly became popular, due mostly to the fact that physicians didn’t have many other options when it came to schizophrenia. But Cardiazol had a few “unpleasant” side effects. For some patients, the drug caused apprehension bordering on terror from the time it was injected until the time the seizure began. This intense dread was not simply a fear of the oncoming seizure, but seemed instead to be a psychological side effect of the medication. The behavior of patients after the seizure could be problematic as well. Some became unpredictable, irrational, and—in rare cases—even suicidal.

These factors, combined with a paucity of evidence that Cardiazol’s effects were actually specific to schizophrenia (the drug seemed able to jolt almost any patient out of a stuporous state, whether they suffered from schizophrenia, depression, mania, or something else), caused Cerletti to search tenaciously for other treatments. But as he continued to test alternative therapies, he couldn’t stop thinking about electricity.

After all, everyone knew that large doses of electricity could cause convulsions. Maybe, then, electricity could also be used to induce the type of convulsions that were thought to have potential in treating schizophrenia.

When Cerletti began testing this idea on dogs, however, he realized how dangerous the approach might be: about half of the animals subjected to electrical shock died of cardiac arrest. What’s more, Cerletti’s group was using stimulation of around 125 volts to cause convulsions in dogs—and deaths in humans had been reported after shocks of as little as 40 volts.

Every week for almost a year, the local dog catcher dropped off an unfortunate collection of dogs at Cerletti’s lab, and the researchers there immediately began experimenting on them. They soon learned that the original placement of the electrodes (one in the mouth and one in the anus) was a large reason dogs were dying after electrical stimulation. This configuration caused the current to cross the heart, which (not surprisingly) sometimes caused cardiac arrest.

When the electrodes were moved to the head, pulses of electricity produced convulsions—but rarely death. Cerletti’s group replicated their experiments with pigs, and similarly found that electrical current applied briefly to the head caused convulsions but didn’t kill. After many tests on canine and porcine subjects, Cerletti was confident that electrical stimulation to the head was not a fatal procedure. It was time for the ultimate test: a human.


The birth of “electroshock”

Which brings us back to the morning of April 11th, 1938. Cerletti was surrounded by a small group of other physicians, a nurse, and an assistant. They had sequestered themselves in a laboratory that had a bed in it, originally installed so the director of the laboratory could rest between experiments.

But now on the bed was a homeless schizophrenic patient with a circular metal apparatus placed on his head. Wires ran from the apparatus to a device on a table nearby.

Lucio Bini—a psychiatrist who had helped to develop the electrical device being used—was watching for Cerletti’s signal to turn the machine on. Everyone else stared intently at the patient, eagerly but apprehensively waiting for something to happen.

Cerletti gave a nod, and Bini flipped the switch to send 80 volts of current surging across the patient’s temples. A flat, mechanical hum emanated from the device, and the muscles throughout the patient’s body contracted spasmodically one time, lifting him up slightly from the bed. Then, his body just as suddenly fell back down—limp, but alive. Upon questioning, the patient didn’t seem to have any recollection of what had just happened.

That was the first proof a human could tolerate this type of controlled electrical stimulation to the head. But Cerletti wasn’t satisfied. He wanted to see convulsions reminiscent of a seizure, not just one spasm. He ordered another shock be given—this time at 90 volts.

The patient’s body convulsed once again, but this spasm lasted a bit longer. The patient stopped breathing—his diaphragm remained contracted—and he began to turn pale. The asphyxia continued for a few seemingly interminable seconds, but then the patient suddenly let out a deep breath. He lay silent for about a minute, then abruptly sat up in bed and began to sing a bawdy song that was popular at the time. The song—as unusual as it was in the moment—elicited a collective sigh of relief from the experimenters, who had naturally begun to wonder if the second shock had been too much.

But again, the whole point was to see if they could prompt a seizure, not just one convulsion. Cerletti wanted to attempt the procedure one more time—with 110 volts.

At this point, according to Cerletti, some of those involved became uneasy, and urged him to stop. Someone suggested the patient be given time to rest; someone else thought it would be better to wait until the next day to continue testing. Then, the patient unexpectedly chimed in with an ominous warning: “Be careful; the first one was a nuisance, the second one was deadly.” Cerletti took in all of these recommendations and simply responded with, “Let’s proceed.”

Bini set the machine for the maximum voltage of 110 volts. When the switch was flipped, that dull humming noise briefly filled the room again. The patient’s muscles contracted in a spasm. But this time, they did not relax immediately afterward. His body began convulsing with the rhythmic shaking of a seizure.

As his body shook, his face began to turn pale due to lack of breathing. Then, it took on a bluish-purple hue—a clear sign of oxygen deprivation. Bini was timing the asphyxia with his watch. It reached twenty seconds, then thirty, then forty. Surely many in the room feared they had finally gone too far. But at 48 seconds, the patient exhaled violently and fell back to the bed—fast asleep. His vitals were normal. Cerletti declared “electroshock” safe to use on humans.


The aftermath through today

Cerletti’s group ended up giving their patient regular electroshock treatments over the next two months, and eventually they claimed he was completely cured. It turned out that he was not just a vagrant. He had a wife who had been searching desperately for him, and eventually they were reunited—providing a nice conclusion to a success story that was uncomfortably close to being a tragedy.

The use of electroshock therapy—which would eventually come to be known as electroconvulsive therapy, or ECT—spread rapidly. Over time, like any other treatment, the technique was refined, and best practices were established for “dose,” duration of the electrical impulse, and placement of the electrodes.

More substantial changes were made as well. Initially, the convulsions evoked by ECT were violent enough to sometimes cause fractures (often spinal fractures) along with other injuries. So, practitioners started administering muscle-relaxing drugs before ECT to reduce the severity of the convulsions. This created another issue: the muscle relaxants temporarily induced complete paralysis, which was often terrifying for patients. Thus, physicians began using anesthesia before the procedure, which allowed patients to remain unaware of the paralysis (or any other unpleasant aspect of the period of time surrounding the seizure).

With these and other modifications, ECT today is considered a safe practice. Serious complications are rare, and memory disturbances are the most problematic side effect. Typically, these memory problems fade with time—although there have been cases where they’ve persisted and had a substantial negative effect on patients’ lives.

The safety of the procedure, however, doesn’t jibe with the perception many people still have about ECT as a dangerous, or even barbaric, method. This perception was created in large part by negative portrayals of ECT in movies and television shows—a classic example being the use of ECT as a disciplinary measure in a psychiatric hospital in the 1975 movie One Flew Over the Cuckoo’s Nest (based on Ken Kesey’s novel of the same name).

ECT has been used in an abusive and/or unscrupulous manner at times, so some of these portrayals may have a grain of truth to them. But ECT today is typically only administered with the full consent of the patient, and the procedure now is much less distressing—for the patient and observer alike—than these fictional depictions suggest.

And, although it’s still not understood how ECT might act on the brain to produce its therapeutic effects, it’s difficult to dispute that it is effective for some conditions. It didn’t end up being the remedy for schizophrenia that Cerletti had hoped for (it does seem to be useful in certain cases of schizophrenia, but most studies find antipsychotic drugs to be more effective), but it is surprisingly effective in its most common application today: the treatment of depression.

In fact, many argue that ECT is among the most potent treatments we have for depression. A number of studies have found it to be as effective as—or more effective than—antidepressant medication, causing some to argue that it’s an extremely underutilized therapeutic approach. Regardless, a number of factors ranging from cost to its potential impact on memory cause ECT to remain more of a “last resort” for depression treatment.

Nevertheless, ECT has found its way back onto the list of respectable therapies in the eyes of most doctors and researchers. And given its somewhat ignominious beginnings as a dangerous experiment with a non-consenting patient, this is quite an achievement.


References (in addition to linked text above):

Accornero F. An Eyewitness Account of the Discovery of Electroshock. Convuls Ther. 1988;4(1):40-49.

Cerletti U. Old and new information about electroshock. Am J Psychiatry. 1950 Aug;107(2):87-94.

Payne NA, Prudic J. Electroconvulsive therapy: Part I. A perspective on the evolution and current practice of ECT. J Psychiatr Pract. 2009 Sep;15(5):346-68. doi: 10.1097/01.pra.0000361277.65468.ef.

Optograms: images from the eyes of the dead

On a cloudy fall morning in 1880, Willy Kuhne, a distinguished professor of physiology at the University of Heidelberg, waited impatiently for 31-year-old Erhard Reif to die. Reif had been found guilty of the reprehensible act of drowning his own children in the Rhine, and condemned to die by guillotine. Kuhne’s eagerness for Reif’s death, however, had nothing to do with his desire to see justice served. Instead, his impatience was mostly selfish—he had been promised the dead man’s eyes, and he planned to use them to quell a bit of scientific curiosity that had been needling him for years.

For several years prior, Kuhne had been obsessed with eyes, and especially with the mechanism underlying the eye’s ability to create an image of the outside world. As part of this obsession, Kuhne wanted to determine once and for all the veracity of a popular belief that the human eye stores away an image of the last scene it observed before death—and that this image could then be retrieved from the retina of the deceased. Kuhne had given these images a name: optograms. He had seen evidence of them in frogs and rabbits, but had yet to verify their existence in people.

Optograms had become something of an urban legend by the time Kuhne started experimenting with them. Like most urban legends, it’s difficult to determine where this one began, but one of the earliest accounts of it can be found in an anonymous article published in London in 1857. The article claimed that an oculist in Chicago had successfully retrieved an image from the eye of a murdered man. According to the story, although the image had deteriorated in the process of separating the eye from the brain, one could still make out in it the figure of a man wearing a light coat. The reader was left to wonder whether or not the man depicted was, in fact, the murderer—and whether further refinements to the procedure could lead to a foolproof method of identifying killers by examining the eyes of their victims.

Optograms remained a subject of intrigue in the latter half of the 19th century, but they became especially interesting to Kuhne when the physiologist Franz Boll discovered a biochemical mechanism that made them plausible. Boll identified a pigmented molecule (later named rhodopsin by Kuhne) in the rod cells of the retina that was transformed from a reddish-purple color to pale and colorless upon exposure to light. At the time, much of the biology underlying visual perception was still a mystery, but we now know that the absorption of light by rhodopsin is the first step in the visual process in rod cells. It also results in something known as “bleaching,” where a change in the configuration of rhodopsin causes it to stop absorbing light until more of the original rhodopsin molecule can be produced.

In studying this effect, Boll found that the bleaching of rhodopsin could produce crude images of the environment on the retina itself. He demonstrated as much with a frog. He put the animal into a dark room, cracked the windows’ shutters just enough to allow a sliver of light in, and let the frog’s eyes focus on this thin stream of light for about ten minutes. Afterwards, Boll found an analogous streak of bleached rhodopsin running along the frog’s retina.

An optogram Kuhne retrieved from the retina of a rabbit, showing light entering the room through a seven-paned window.

Kuhne was intrigued by Boll’s research, and soon after reading about it he started his own studies on the retina. He too was able to observe optograms in the eyes of frogs, and he saw an even more detailed optogram in the eye of a rabbit. It preserved an image of light coming into the room from a seven-paned window (see picture to the right).

Kuhne worked diligently to refine his technique for obtaining optograms, but eventually decided that—despite the folklore—the procedure didn’t have any forensic potential (or even much practical use) at all. He found that the preservation of an optogram required intensive work and a great deal of luck. First, the eye had to be fixated on something and prevented from looking away from it (even after death), or else the original image would rapidly be intermingled with others and become indecipherable. Then, after death the eye had to be quickly removed from the skull and the retina chemically treated with hardening and fixing agents. This all had to be done in a race against the clock, for if the rhodopsin was able to regenerate (which could even happen soon after death), the image would be erased and the whole effort would have been for naught. Even if everything went exactly as planned and an optogram was successfully retrieved, it’s unclear if the level of detail within it could be enhanced enough to make the resultant image anything more than a coarse outline—a very rough approximation of the outside world.

Regardless, Kuhne couldn’t overlook the opportunity to examine Reif’s eyes. After all, he had never had the chance to see whether optograms might persist in a human eye after death, and—who knew—perhaps optograms in the human eye would be qualitatively different from those made in the eyes of frogs and rabbits. Maybe human optograms would be more accessible and finely detailed than he expected. Perhaps they might even be scientifically valuable.

Reif was beheaded in the town of Bruchsal, a few towns over from Kuhne’s laboratory. After Reif’s death, Kuhne quickly took the decapitated head into a dimly lit room and extracted the left eye. He prepared it using the process he had refined himself, and within 10 minutes he was looking at what he had set out to see: a human optogram.

Kuhne’s drawing of the image he saw when he examined Erhard Reif’s retina.

So was this the revolutionary discovery that would change ophthalmic and forensic science forever? Clearly not, or murder investigations would look much different today. Kuhne made a simple sketch of what he saw on Reif’s retina (reprinted to the right in the middle of the text from one of Kuhne’s papers). As you can see, it’s a bit underwhelming—certainly not the type of image that would solve any murder mysteries. It confirmed that the level of detail in a human optogram didn’t really make it worth the trouble of retrieval. Kuhne didn’t provide any explanation as to what the image might be. Of course any attempt to characterize it would amount to pure speculation, and perhaps the esteemed Heidelberg physiologist was not comfortable adding this sort of conjecture to a scientific paper.

This experience was enough to deter Kuhne from continuing to pursue the recovery of human optograms, and one might think it would have marked a logical end to the fascination with optograms in general. The idea of using them to solve murders, however, reappeared periodically well into the 1900s. In the 1920s, for instance, an editorial in the New York Times critiqued a medical examiner who had neglected to take photographs of a high-profile murder victim’s eyes, suggesting that an important opportunity to retrieve an image of the murderer had been lost.

But as the 20th century wore on and our understanding of the biochemistry of visual perception became clearer, interest in optograms finally dwindled. Those who studied the eye were not convinced of their utility, and that opinion eventually persuaded the public of the same. It’s intriguing to think, though, how different our world would have been if optograms really had lived up to the hype. It certainly would have simplified some episodes of CSI.


Lanska DJ. Optograms and criminology: science, news reporting, and fanciful novels. Prog Brain Res. 2013;205:55-84. doi: 10.1016/B978-0-444-63273-9.00004-6.

History of neuroscience: Julien Jean Cesar Legallois

The idea that different parts of the nervous system are specialized for specific functions has been a pervasive concept in brain science since ancient times. It is perhaps best exemplified by the belief---dating back to the 4th century CE---that the four cavities of the brain known as the ventricles were each responsible for a different function: perception in the two lateral ventricles, cognition in the third ventricle, and memory in the fourth. By the early 1800s, however, there was still no definitive experimental evidence linking a particular function to a circumscribed area of the brain.

Image showing the medulla oblongata, the region of the brainstem that Legallois found was essential to respiration.

This changed with Julien Jean Cesar Legallois, a young French physician who was driven to identify the parts of the brain and body that were essential for maintaining life. The thinking at the time was that the heart and brain were both integral to life, but there was some debate about where the life-sustaining centers in the brain were located. Some, for example, considered the cerebellum to be the organ that controlled vital functions like heartbeat and respiration. Research conducted in the second half of the 18th century by the French physician Antoine Charles de Lorry, however, had suggested that the area of the brain most critical to life was found in the upper spinal cord. Legallois would take Lorry's research a step further by conducting a series of gruesome experiments with rabbits that would help him pinpoint the center of vital functions in the brain.

Before detailing these experiments, it's important to mention that Legallois' studies were done at a time when the ethical treatment of animals in research---and indeed ethics in research at all---was not given much thought. Legallois was a vivisectionist, meaning that he performed surgery on living animals in his experiments. His work would be unlikely to be approved by a university or research institution today, and his own impassive descriptions of his grisly experiments read like something a budding serial killer might have dreamed up before moving on to human victims. But this was a different time, when thoughts about animal welfare were not as well formulated as they are now---and Legallois was far from the only vivisectionist of his day. Indeed, a great deal of our current neuroscience knowledge was developed using experimental methods we would consider unjustifiably cruel today.

Legallois' method of exploring the centers of vital functions in the brain primarily involved the decapitation of rabbits. Legallois observed that after a decapitation made at certain levels of the brainstem, the headless body of a rabbit could still continue to breathe and "survive" for some time (up to five and a half hours according to Legallois). Decapitation further down the brainstem, however, would cause respiration to cease immediately. This observation was in agreement with Lorry's. Legallois then set out to isolate the particular part of the brainstem where these respiratory functions were located.

To do this, Legallois opened the skull of a young rabbit (while the rabbit was still alive), and began to remove portions of the brain---slice by slice. He found that he could remove all of the cerebrum and cerebellum and much of the brainstem, and respiration would continue. But, when he reached a particular location in the medulla oblongata---at the point of origin for the vagus nerve---respiration stopped. Thus, Legallois surmised that respiration did not depend on the whole brain but on one circumscribed area of the medulla. He concluded that the "primary seat of life" was in the medulla, not the cerebellum or cerebrum.

Legallois published the details of his seminal experiment in 1812. We now consider the medulla to be a critical area for the control of respiration as well as the regulation of heart rate, and the region is often considered to be a center of vital functions in the nervous system. Indeed, Legallois was influential in establishing the hypothesis that the brain is involved in the regulation of heart rate as well (prior hypotheses had emphasized the ability of the heart to act alone---without the influence of the brain). While Legallois was not the first to hypothesize that vital functions are localized to the medulla (he was preceded by Lorry), he was the first to provide clear experimental evidence linking the medulla to such functions, and he greatly refined Lorry's estimation of where the vital centers were located. In the process, Legallois gave us our first clear evidence that linked a function to a localized area of the brain.

Cheung T. 2013. Limits of Life and Death: Legallois's Decapitation Experiments. Journal of the History of Biology. 46: 283-313.

Finger, S. 1994. Origins of Neuroscience. New York, NY: Oxford University Press.

For more about the medulla oblongata's role in vital functions, read this article: Know your brain - Medulla oblongata