Saturday, December 29, 2012

How a Simple Smartphone Can Turn Your Car, Home, or Medical Device into a Deadly Weapon | Vanity Fair

Last October, at Melbourne's grand Intercontinental Hotel, scores of technophiles watched a researcher for IOActive, a Seattle-based computer-security firm, demonstrate an ingenious new way to kill someone—a method that one can imagine providing a sensational plot twist in an episode of Homeland.

The IOActive researcher, a man named Barnaby Jack, was so worried about the implications of his work that he intentionally obscured many of the details in his presentation. As a further precaution, he asked the attendees not to take any pictures—a tough request in a crowd full of smartphones and laptops.

Jack's work concerned pacemakers and implantable cardioverter-defibrillators (I.C.D.'s). More than three million American heart patients carry around these small, computerized devices, which monitor their heartbeat and deliver jolts of electricity to stabilize it when needed. To check and adjust these devices, many doctors use wand-like wireless programmers that they wave a few inches above patients' chests—a straightforward and seemingly safe procedure. But now, with a custom-built transmitter, Jack had discovered how to signal an I.C.D. from 30 feet away. It reacted as if the signal were in fact coming from the manufacturer's official I.C.D. programmer. Instructed by the counterfeit signal, the I.C.D. suddenly spat out 830 volts—an instantly lethal zap. Had the device been connected to an actual human heart, the fatal episode would likely have been blamed on a malfunction.

Let's face it: Barnaby Jack is a man who is quite literally looking for trouble. This is a guy who had demonstrated the year before how he could wirelessly direct an implantable insulin pump to deliver a lethal dose. The year before that, he hacked an ATM to make it spray out bills like a slot machine. But trouble-making is what he's paid to do at IOActive, and in that role he has developed a particular respect for the looming power of smartphones. Terrorists have already used cell phones to kill people in the crudest possible way: detonating explosives in Iraq and Afghanistan. But smartphones bring a new elegance to the endeavor and will bring new possibilities for mayhem into the most mundane areas of life.

The day is not far off, Jack says, when the manipulation of medical devices, for which he had needed to build special equipment, will be done routinely and remotely by punching keys on a smartphone. Indeed, in just a few minutes of online searching, I was able to find a dozen ventures developing smartphone apps for medical devices: pacemakers, defibrillators, cochlear implants, insulin pumps, cardiovascular monitors, artificial pancreases, and all the other electronic marvels doctors now are inserting into human bodies.

To engineers, the advantages are clear. Smartphones can relay patients' data to hospital computers in a continuous stream. Doctors can alter treatment regimens remotely, instead of making patients come in for a visit. If something goes wrong, medical professionals can be alerted immediately and the devices can be rapidly adjusted over the air. Unfortunately, though, the disadvantages are equally obvious to people like Barnaby Jack: doctors will not be the only people dialing in. A smartphone links patients' bodies and doctors' computers, which in turn are connected to the Internet, which in turn is connected to any smartphone anywhere. The new devices could put the management of an individual's internal organs in the hands of every hacker, online scammer, and digital vandal on Earth.

I asked Jack if he thought anyone would actually use smartphones to try to fiddle with other people's pacemakers, or change the dosage of their medications, or compromise their eyesight, or take control of their prosthetic limbs, or raise the volume of their hearing aids to a paralyzing shriek. Will this become a tempting new way to settle a score or hurry up an inheritance? He said, "Has there ever been a box connected to the Internet that people haven't tried to break into?" He had a point: a few years ago, anonymous vandals inserted flashing animated images into an Epilepsy Foundation online forum, triggering migraines and seizure-like reactions in some unfortunate people who came across them. (The vandals were never found.) Jack was reluctant to go into detail about what he thinks the future may hold. "I'm not comfortable trying to predict exact scenarios," he said. But then he added, calm as a State Department spokesman, "I can say that I wouldn't want to discover a virus in my insulin pump."

Smartphones taking control of medical devices: the tabloid headlines write themselves. But medical devices represent only one early and obvious target of opportunity. Major power and telephone grids have long been controlled by computer networks, but now similar systems are embedded in such mundane objects as electric meters, alarm clocks, home refrigerators and thermostats, video cameras, bathroom scales, and Christmas-tree lights—all of which are, or soon will be, accessible remotely. Every automobile on the market today has scores of built-in computers, many of which can be accessed from outside the vehicle. Not only are new homes connected to the Internet but their appliances are too. "Start your coffee machine with a text message!" says a video for Electric Imp, a device created by former Gmail and iPhone employees, whose stated goal is to "apply [Internet connectivity] to any device in the world." Even children's toys have Internet addresses: for instance, you can buy an add-on wi-fi kit for your Lego robot. The spread of networking technology into every aspect of life is sometimes called "the Internet of Things."

The embrace of a new technology by ordinary people leads inevitably to its embrace by people of malign intent. Up to now, the stakes when it comes to Internet crime have been largely financial and reputational—online crooks steal money and identities but rarely can inflict physical harm. The new wave of embedded devices promises to make crime much more personal.

Consider the automobile. Surely nobody involved in the 2000 Bridgestone/Firestone scandal—a series of deadly rollovers in Ford Explorers, linked to disintegrating tires—realized that they were laying the groundwork for a possible new form of crime: carjacking-by-tire. In the aftermath of the accidents, Congress quickly toughened tire-safety regulations. Since 2007, every new car in the United States has been equipped with a tire-pressure-monitoring system, or T.P.M.S. Electronic sensors in the wheels report tire problems to an onboard computer, which flashes a warning icon on the dashboard.

By itself, the T.P.M.S. represents no great leap. Modern cars are one of the most obvious examples of the Internet of Things. It is a rare new vehicle today that contains fewer than 100 computers, called electronic control units (E.C.U.'s), which direct and monitor every aspect of the vehicle. When drivers screech to a sudden stop, for instance, sensors in the wheels detect the slowdown and send the information to an E.C.U. If one wheel is rotating more slowly than the others—an indicator of brake lock—the E.C.U. overrides the brake and the accelerator, preventing the skid. Even as it fights the skid, the computer reaches into the seatbelt controls, tightening the straps to prevent passengers from slipping under them in case of an accident. The software for these complex, overlapping functions is formidable: as much as 100 million lines of computer code. (By contrast, Boeing's new 787 Dreamliner makes do with about 18 million lines of code.)

Many of these functions can be activated from outside. Door locks are opened by radio pulses from key fobs. G.P.S. systems are upgraded by special C.D.'s. Ignitions can be disabled by remote-controlled "immobilizers" in case of theft or repossession. Cars increasingly offer "telematics" services, such as OnStar (from General Motors), BMW Assist, MyFord Touch, and Lexus Link, that remotely diagnose engine problems, disable stolen cars, transmit text messages and phone calls, and open doors for drivers who have locked themselves out. As cars grow more sophisticated, their owners will, like computer owners, receive routine, annoying updates for the code that runs these features; Tesla, the electric-vehicle manufacturer, announced the planet's first over-the-air car-software patch in September. A security-research team from InterTrust Technologies, a company that makes protected computer systems for businesses, describes today's automobiles as full-time residents of cyberspace, scarcely distinguishable from "any other computational node, P.C., tablet, or smartphone."

The tire-pressure-monitoring system is an example. As a rule, it consists of four battery-operated sensors, one attached to the base of each tire valve. The sensors "wake up" when the wheels begin rotating. Typically, they send out minute-by-minute reports—the digital equivalent of messages like "I'm the right front tire; my pressure is 35 p.s.i."—to an E.C.U. To make sure the E.C.U. knows which tire is reporting, each sensor includes an identification number with its report. The ID is specific to that one tire. In 2010, researchers from Rutgers and the University of South Carolina discovered that they could read a tire's ID from as far away as 130 feet. This means that every car tire is, in effect, a homing device and that people 130 feet from an automobile can talk to it through its tires.
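The tracking problem the Rutgers–South Carolina team identified can be boiled down to a few lines of code. The sketch below is purely illustrative—the field names and message format are invented, not taken from any real T.P.M.S. firmware—but it captures the design flaw: each sensor broadcasts a fixed, factory-burned ID in the clear with every report, so anyone who can hear the broadcast can recognize the car again later.

```python
# Illustrative sketch (hypothetical field names, not real T.P.M.S. firmware):
# why a fixed sensor ID makes every tire a trackable beacon.

import random

class TirePressureSensor:
    """Simulates one battery-operated sensor attached to a tire valve."""

    def __init__(self):
        # Factory-burned ID, unique to this one tire -- it never changes.
        self.sensor_id = random.getrandbits(32)

    def report(self, psi):
        # Broadcast roughly once a minute, unencrypted, to the car's E.C.U.
        # An eavesdropper up to 130 feet away hears the same thing.
        return {"id": self.sensor_id, "pressure_psi": psi}

sensor = TirePressureSensor()
monday_report = sensor.report(35)
friday_report = sensor.report(34)
# The pressure changes, but the ID is identical in every report --
# the same car can be recognized wherever it drives.
assert monday_report["id"] == friday_report["id"]
```

Because the ID is constant and unauthenticated, a roadside receiver that logs it once can match it again anywhere—which is exactly what makes each tire "a homing device."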

Schrader Electronics, the biggest T.P.M.S. manufacturer, publicly scoffed at the Rutgers–South Carolina report. Tracking cars by tire, it said, is "not only impractical but nearly impossible." T.P.M.S. systems, it maintained, are reliable and safe.

This is the kind of statement that security analysts regard as an invitation. A year after Schrader's sneering response, researchers from the University of Washington and the University of California–San Diego were able to "spoof" (fake) the signals from a tire-pressure E.C.U. by hacking an adjacent but entirely different system—the OnStar-type network that monitors the T.P.M.S. for roadside assistance. In a scenario from a techno-thriller, the researchers called the cell phone built into the car network with a message supposedly sent from the tires. "It told the car that the tires had 10 p.s.i. when they in fact had 30 p.s.i.," team co-leader Tadayoshi Kohno told me—a message equivalent to "Stop the car immediately." He added, "In theory, you could reprogram the car while it is parked, then initiate the program with a transmitter by the freeway. The car drives by, you call the transmitter with your smartphone, it sends the initiation code—bang! The car locks up at 70 miles per hour. You've crashed their car without touching it."

Systematically probing a "moderately priced late-model sedan with the standard options," the Washington–San Diego researchers decided to see what else they could do. They took control of the vehicle by contacting the hands-free system through the built-in cellphone and playing a special audio file. They compromised the hands-free microphone and recorded conversations in the car as it moved. They reprogrammed a mechanics' diagnostic computer to let them take over the sedan's operation remotely, at a time of their choosing. They used Bluetooth signals to start cars that were parked, locked, and alarmed. They did all this with instructions sent from a smartphone.

There was nothing to stop them. "Except for medical devices," Stuart McClure, chief technical officer of the anti-virus company McAfee, told me, "nobody regulates any of this stuff." And medical devices are regulated for safety, not security. Because government isn't wielding a cudgel, security is entirely up to the manufacturers. In McClure's view, "maybe 90 percent" of the vendors don't see security as critical. The same thing was true of computer-software companies, he pointed out. Not until credit-card numbers by the millions began to be stolen did they begin to pay attention. "We live in a reactive society," McClure went on, "and something bad has to happen before we take problems seriously. Only when these embedded computers start to kill a few people—one death won't do it—will we take it seriously."

It is a commonplace that most murders occur at home, which leads (solely for the purposes of illustration) to my own. My wife is an architect, so when we recently built a house we built one to her design. Late last spring, we moved in, hauling boxes as workers hurried to finish the last details. One day I walked into the basement to find the plumber peering in puzzlement at a device installed next to the circuit breakers. It was a white, lozenge-shaped object with a small L.E.D. panel on its face that showed a "dotted quad"—an Internet address in the form of four numbers separated by periods. "What's that?" asked the plumber. "It looks like your house is connected to the Internet."

I didn't know. The contractor didn't know, either. Nor did the cable guy or the house-alarm guy. After a few phone calls, I learned that our electric company had installed the mystery box to monitor the new solar panels on the roof. Our house—or at least our roof—was part of the Internet of Things.

The white lozenge, it turned out, was part of a "smart meter," one of the most common among a wave of new devices that will, developers hope, produce the domestic dream of a "smart home." In smart homes, residents can control their lighting, heating, air-conditioning, fire and burglar alarms, lawn sprinklers, and kitchen appliances with the touch of a button. Increasingly, that button is on a computer or smartphone. These systems can help make homes more convenient, energy efficient, and safe. They are also a point of entry for online intruders—no different, really, from an open window or an unlocked door.

Computer-security researchers are focusing attention on smart meters in part because utilities have been installing them by the millions. (The Obama stimulus bill provided $4.5 billion for "smart grid" projects; the European Union has mandated a switch-over to smart meters by 2022.) Instead of learning about energy consumption inside a home or building from meter readers in white vans, electric companies now know about power usage in real time, from streaming data provided over the Internet, letting them avoid the cascading failures that lead to blackouts. Utilities talk up the environmental benefits of smart meters—no more wasted power! Utilities are quieter about "remote disconnect"—the possibility, created by smart meters, of cutting power to nonpaying customers with the flick of a switch or the punch of a phone key.

Because smart meters register every tiny up and down in energy use, they are, in effect, monitoring every activity in the home. By studying three homes' smart-meter records, researchers at the University of Massachusetts were able to deduce not only how many people were in each dwelling at any given time but also when they were using their computers, coffee machines, and toasters. Incredibly, Kohno's group at the University of Washington was able to use tiny fluctuations in power usage to figure out exactly what movies people were watching on their TVs. (The play of imagery on the monitor creates a unique fingerprint of electromagnetic interference that can be matched to a database of such fingerprints.)
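The Washington researchers' movie-identification trick is, at heart, a matching problem: compare the observed power trace against a database of stored fingerprints and pick the closest one. The sketch below is a toy illustration with invented numbers—real fingerprints are electromagnetic-interference patterns, and real matching is far more sophisticated—but the principle is the same.

```python
# Illustrative sketch (invented data): matching a smart meter's power
# trace against a database of movie "fingerprints." Plain lists of
# numbers stand in for the real electromagnetic-interference patterns.

def similarity(a, b):
    # Normalized dot product (cosine similarity) of two equal-length traces.
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) * sum(y * y for y in b)) ** 0.5
    return dot / norm if norm else 0.0

def identify_movie(observed, fingerprints):
    # Return the title whose stored trace best matches the observed trace.
    return max(fingerprints, key=lambda t: similarity(observed, fingerprints[t]))

fingerprints = {
    "Movie A": [1, 5, 2, 8, 3],
    "Movie B": [9, 1, 9, 1, 9],
}
meter_trace = [2, 6, 2, 9, 3]  # a noisy reading from the basement meter
print(identify_movie(meter_trace, fingerprints))  # prints "Movie A"
```

The unsettling part is that nothing here requires breaking into the home: the meter's own fine-grained reporting does all the work.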

Like the computer on my home-office desk, the smart-meter computer in my basement is vulnerable to viruses, worms, and other Internet perils. As long ago as 2009, Mike Davis of IOActive was able to infect smart meters with virus-like code. The infected meters could then spread the malware to other, nearby meters. In theory, smart-meter viruses could black out entire neighborhoods at a stroke. They could also ripple back and infect the central controls at utility companies. Because those utility networks are usually decades old, they often lack basic security features, such as firewalls and anti-virus protection. "If I'm a bad guy, I'll wait till there's a major snowstorm or heat wave," said McClure. "Then kill the heat or A/C." Under such circumstances, he observed, "the elderly die very easily."

For average homeowners like me, smart meters are almost as invisible as their risks. We're much more aware of the new temperature, security, and lighting controls operated by smartphones or tablets. (In September, the big real-estate developer Taylor Morrison announced a nationwide rollout of "interactive homes" that include front-door video monitoring, whole-house Internet audio integrated with iTunes, and remotely programmable lighting and appliances.) Just around the corner, according to tech analysts, are refrigerators that alert families when they've run out of milk, ovens that can be turned on from the office, counters that double as video displays for recipes, videos, or Skype chats, and sensors that detect when residents are ill or hurt and that automatically call 911.

In the rush to put computers into everything, neither manufacturers nor consumers think about the possible threats. "I would be shocked if a random parent at Toys R Us picked up a toy with a wireless connection and thought, I wonder if there are any security problems here," Kohno said to me. As he has himself demonstrated, children's Erector Sets with Web cams can be taken over remotely and used for surveillance. Kohno added, "I just hope you can't use them to turn on the broiler and set the house on fire." It was meant as joking hyperbole. But you won't need an Erector Set to physically turn on the broiler. Smartphone apps will do that for you. And when that's done—what the heck—you can kill the power, disable the fire alarm, suppress the call to 911, and for good measure start the car and leave it running in the garage.

Today, of course, these threats are remote. Only experts like Kohno can digitally hijack a house. But it is the nature of software to get easier to use and more widely available. Creating the first Internet worm required months of work in the late 1980s by a brilliant computer-science student, Robert T. Morris, who is now a professor at M.I.T. Today "virus construction kits" are readily downloadable on the Web, intended for teenaged miscreants with little programming ability. The expertise and time required for this type of vandalism have steadily declined. As a result, Internet threats have steadily risen. As I researched this article, every single computer-security expert I spoke with said they expected precisely the same pattern—obscure and rare to common and ubiquitous—to hold for the Internet of Things.

More than 1.5 million external defibrillators—flat, plastic devices that deliver shocks to people in cardiac arrest—have been installed in American offices, malls, airports, restaurants, hotels, stadiums, schools, health clubs, and, of course, hospital wards. (Usually bright red or yellow, they are typically mounted in boxes that look a bit like big fire alarms.) A.E.D.'s, as they are called, administer shocks through two pads taped to patients' chests that also monitor their heartbeats. Many have the ability to simultaneously call 911 when they are used. A.E.D.'s are, in fact, computers, and most of them are updated with Windows-based software on a U.S.B. stick.

Last year, Kevin Fu of the University of Massachusetts and five other researchers decided to find out whether an A.E.D. could be hacked. They discovered four separate methods for subverting the apparatus, two of which would allow the A.E.D.'s to be used as a portal for taking over nearby hospital computers.

In a way, Fu told me, using A.E.D.'s to hijack hospital computers was "irrelevant," because computers are often already compromised by other means. Critically important devices like the fetal monitors for women with high-risk pregnancies can be so burdened with malware they no longer function. "I remember one computer in a radiology room that was absolutely riddled with viruses because the surgeons and nurses checked their e-mail on it," Fu said. "And it was the computer that ran the radiology equipment." Why didn't people check e-mail on a separate computer? "They said there wasn't enough room on the table for two machines," he said.

Even when staffers aren't careless, hospital-security problems can be difficult to fix. Medical manufacturers, Fu said, frequently will not allow hospitals to modify their software—even just to add anti-virus protection—because they fear that the changes would have to be reviewed by the U.S. Food and Drug Administration, a complex and expensive process. The fear is wholly justified; according to the F.D.A., most medical-device software problems are linked to updates, patches, and revisions.

Hospital equipment like external defibrillators and fetal monitors can at least be picked up, taken apart, or carted away. Implanted devices—equipment surgically implanted into the body—are vastly more difficult to remove but not all that much harder to attack.

You don't even have to know anything about medical devices' software to attack them remotely, Fu says. You simply have to call them repeatedly, waking them up so many times that they exhaust their batteries—a medical version of the online "denial of service" attack, in which botnets overwhelm Web sites with millions of phony messages. On a more complex level, pacemaker-subverter Barnaby Jack has been developing Electric Feel, software that scans for medical devices in crowds, compromising all within range. Although Jack emphasizes that Electric Feel "was created for research purposes," he acknowledges that "in the wrong hands it could have deadly consequences." (A Government Accountability Office report noted in August that Uncle Sam had never systematically analyzed medical devices for their hackability, and recommended that the F.D.A. take action.)
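The battery-exhaustion attack Fu describes needs no knowledge of the device's internals at all. The simulation below uses entirely invented numbers—real implant batteries, wake-up costs, and protocols differ—but it shows the arithmetic of the attack: every bogus query forces the radio to power up, and a battery budgeted for years of occasional use can be drained by nothing more than persistence.

```python
# Illustrative simulation (invented numbers): the battery-exhaustion
# attack on an implanted device. Each bogus wake-up call forces the
# implant to power up its radio and answer, and every answer costs power.

class Implant:
    def __init__(self, battery_mah=1000.0):
        self.battery_mah = battery_mah

    def handle_wakeup(self, cost_mah=0.5):
        # Answering any query -- legitimate or not -- drains the battery.
        self.battery_mah = max(0.0, self.battery_mah - cost_mah)
        return self.battery_mah > 0.0

implant = Implant()
calls = 0
alive = True
# The attacker needs no credentials or exploit: just call, and call again.
while alive:
    calls += 1
    alive = implant.handle_wakeup()

print(calls)  # prints 2000: the battery is dead after 2,000 bogus calls
```

At one call per second, those 2,000 calls take barely half an hour—which is why a denial-of-service attack that is merely annoying against a Web site becomes something much darker against a pacemaker.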

Some 20 million Americans today carry implanted medical devices of some kind. As the population ages, that number will only grow, as will the percentage of those devices that are accessible by smartphone. So will the number of connected smart homes. Possibly people will own versions of Google's driverless car, in which all navigation is controlled by computers and sensors—devices that a hacker with a smartphone can adjust with satisfyingly grim results. If Ridley Scott, say, were to attempt a remake of Dial M for Murder, I'm not sure he'd know where to begin.

"In 10 years," Kohno told me, "computers will be everywhere we look, and they'll all have wireless. Will you be able to compromise someone's insulin pump through their car? Will you be able to induce seizures by subverting their house lights? Will you be able to run these exploits by cell phone? What's possible? It's more like 'What won't be possible?'"

A Circus of Pain


It was a cool fall day, but the sun seemed extremely bright as the young man helped guide nine circus elephants to their new pens. Even though the man was wearing sunglasses, the morning sun reflecting off the metal equipment felt like a knife cutting into his right eye. His head throbbed behind the eye, and an occasional tear rolled down his cheek. When the animals were finally secured, he returned to his trailer. "O.K., I do need a doctor," he said to his girlfriend. His hand was cupped over the side of his face. "Right now."



It was the worst headache of his life, the 25-year-old patient told the doctor in the emergency room of Highland Hospital in Rochester. It started five days earlier when the circus was in Connecticut. At first it wasn't a big deal. He would take a couple of aspirin, and it would disappear. But when the medicine wore off, the headache was still there. In fact, each time it seemed just a little worse. That morning, when he got out of bed, the pain was unbearable. He took aspirin, Advil, Tylenol. Nothing put a dent in it. The pain was sharp and on the right. It felt as if someone were slamming a door inside his head. He'd had the occasional headache but never something like this.

He didn't smoke, rarely drank and took no medications. He had no recent head trauma, though he was head-butted by a zebra a few years ago. That hurt — it broke his glasses — but not this much. His mother had migraines, and perhaps that's what this was. Maybe, the doctor said, though a week was a long time for a migraine.


For doctors, a description of a headache as the worst is a red flag. We worry about headaches described as the first (for someone who doesn't have headaches) or the worst (for someone who does) or those that are "cursed" by the presence of other symptoms like weakness or confusion. He didn't have other symptoms, but the doctor was concerned because he called it the worst.


The doctor ordered a painkiller and blood tests to look for signs of infection or inflammation. She also ordered a CT scan of the head to look for a tumor or evidence of blood. The blood tests were normal. The CT was not.

Within the brain, there are compartments where spinal fluid is made. The fluid then circulates around the brain and spinal cord and is reabsorbed. Two of these compartments, known as the lateral ventricles, are usually mirror images of each other. But in this patient, the ventricle on the right, where his headache was located, was much larger than the one on the left. That suggested there might be a blockage in the circulation of the spinal fluid on the right side, which was causing pressure to build. That could certainly cause a headache—and permanent damage if not addressed quickly.


A slide from the CT scan of the patient's head.
Even before the E.R. doctor saw the CT scan, she called neurology for help in figuring out this patient's terrible headache. The neurology resident examined the patient and his CT scan, but it wasn't clear to him how the pieces fit together. If the asymmetry were caused by an obstruction, the patient should have symptoms associated with increased brain pressure — like nausea — but he didn't. The resident knew that he didn't have enough data to make a diagnosis. Watching the patient over time would give him more. If there was a blockage in his brain, he should begin to feel nauseated and weak. If he didn't, it was very unlikely that the asymmetry reflected a blockage. The patient was admitted to the hospital, where nurses were to examine him every four hours to look for any change.


Overnight the headache became worse, despite the use of several powerful painkillers. By morning the patient was exhausted from the pain and nearly incoherent from the narcotics. He never, however, developed symptoms of increased pressure in his brain. The neurologist speculated that this was a migraine and recommended he go home and follow up as an outpatient.

The neurosurgeons weren't so sure there wasn't an obstruction. The patient's worsening pain was worrisome. They recommended an M.R.I. If there was a change in the size of the ventricle, when compared with the CT, they could drill a small hole into his skull and relieve the pressure.


Dr. Bilal Ahmed, the internist taking over the patient's care that morning, first heard about the new patient from his team of residents outside the patient's door. They told him that he was a young circus worker who had been hit in the head by a zebra, had an abnormal CT and was probably going to surgery later in the day.

As they stood there, a nurse hurried out of the patient's room. "He's got a rash," she told the doctors. The team went into the room, and Dr. Ahmed glanced at the patient, now hidden beneath a pile of blankets. He introduced himself to the patient's girlfriend. As she started to speak, Dr. Ahmed held a finger to his lips. "Don't say anything," he told her. "I want to see for myself."

"May I look?" he asked the young man. A matted head of dark curls slowly emerged from beneath the mound of blankets. The patient sat up slowly, blinking in the dim light. His right eyelid was swollen and drooped drunkenly over the pupil so that only the lower ridge of the greenish brown iris was visible. The right side of his forehead was red, as if he had a sunburn on that half of his face. And there was a sprinkling of bumps over his eye and forehead.

Was this zoster? Dr. Ahmed wondered out loud. He touched the reddened skin around the lesions. The young man winced. That part of his forehead had been intensely sensitive ever since this headache started.


Herpes zoster — or shingles — is the re-emergence of the herpes virus that causes chickenpox. The word "shingles" comes from the Latin "cingulum," which means "belt" or "girdle"; the rash of herpes zoster often appears in a band, usually on the trunk or chest. When a chickenpox infection resolves, the virus takes refuge in branches of the nerves just outside the spinal cord, where it usually resides for decades. Sometimes the virus re-emerges, but the reasons are unclear. Most of these outbreaks are painful but not dangerous — except when the virus emerges in the nerves near the eyes.


Dr. Ahmed called the neurosurgeon. Was there a link between this patient's shingles and the asymmetric ventricles? No, he was told. If this guy has shingles — and it sounded as if he did — then the asymmetry was probably something he was born with. The M.R.I., done later that day, confirmed that there was no obstruction. In the meantime, the patient was started on an antiviral medication. Despite the treatment, his vision began to blur. The bumps on his face, which led to the diagnosis, had spread to his eye as well. Two years later, his vision is still impaired on that side.


In this case, as in so many, time is a powerful and frequently undervalued diagnostic tool. The rash appeared days after the symptoms began; that is common in zoster. But without the telltale rash, there was only the pain and the abnormal CT, and that led his doctors to worry that his pain was the result of pressure building up in his brain. A truism in medicine is that when we hear hoof beats we should think of ordinary horses as the cause rather than the rare zebra. In this case, time revealed that what looked likely to be a zebra — an obstruction on the right side of the brain — was actually the everyday horse of herpes zoster.

Neuroscience: Under Attack

This fall, science writers have made sport of yet another instance of bad neuroscience. The culprit this time is Naomi Wolf; her new book, "Vagina," has been roundly drubbed for misrepresenting the brain and neurochemicals like dopamine and oxytocin.

Earlier in the year, Chris Mooney raised similar ire with the book "The Republican Brain," which claims that Republicans are genetically different from — and, many readers deduced, lesser than — Democrats. "If Mooney's argument sounds familiar to you, it should," scoffed two science writers. "It's called 'eugenics,' and it was based on the belief that some humans are genetically inferior."

Sharp words from disapproving science writers are but the tip of the hippocampus: today's pop neuroscience, coarsened for mass audiences, is under a much larger attack.

Meet the "neuro doubters." The neuro doubter may like neuroscience but does not like what he or she considers its bastardization by glib, sometimes ill-informed, popularizers.

A gaggle of energetic and amusing, mostly anonymous, neuroscience bloggers — including Neurocritic, Neuroskeptic, Neurobonkers and Mind Hacks — now regularly point out the lapses and folly contained in mainstream neuroscientific discourse. This group, for example, slammed a recent Newsweek article in which a neurosurgeon claimed to have discovered that "heaven is real" after his cortex "shut down." Such journalism, these critics contend, is "shoddy," nothing more than "simplified pop." Additionally, publications from The Guardian to the New Statesman have published pieces blasting popular neuroscience-dependent writers like Jonah Lehrer and Malcolm Gladwell. The Oxford neuropsychologist Dorothy Bishop's scolding lecture on the science of bad neuroscience was an online sensation last summer.

As a journalist and cultural critic, I applaud the backlash against what is sometimes called brain porn, which raises important questions about this reductionist, sloppy thinking and our willingness to accept seemingly neuroscientific explanations for, well, nearly everything.

Voting Republican? Oh, that's brain chemistry. Success on the job? Fortuitous neurochemistry! Neuroscience has joined company with other totalizing worldviews — Marxism, Freudianism, critical theory — that have fallen victim to overuse and misapplication.

A team of British scientists recently analyzed nearly 3,000 neuroscientific articles published in the British press between 2000 and 2010 and found that the media regularly distorts and embellishes the findings of scientific studies. Writing in the journal Neuron, the researchers concluded that "logically irrelevant neuroscience information imbues an argument with authoritative, scientific credibility." Another way of saying this is that bogus science gives vague, undisciplined thinking the look of seriousness and truth.

The problem isn't solely that self-appointed scientists often jump to faulty conclusions about neuroscience. It's also that they are part of a larger cultural tendency, in which neuroscientific explanations eclipse historical, political, economic, literary and journalistic interpretations of experience. A number of the neuro doubters are also humanities scholars who question the way that neuroscience has seeped into their disciplines, creating phenomena like neuro law, which, in part, uses the evidence of damaged brains as the basis for legal defense of people accused of heinous crimes, or neuroaesthetics, a trendy blend of art history and neuroscience.

It's not hard to understand why neuroscience is so appealing. We all seek shortcuts to enlightenment. It's reassuring to believe that brain images and machine analysis will reveal the fundamental truth about our minds and their contents. But as the neuro doubters make plain, we may be asking too much of neuroscience, expecting that its explanations will be definitive. It's hard to imagine that any functional magnetic resonance imaging or chemical map will ever explain "The Golden Bowl" or heaven. Or that brain imaging, no matter how sophisticated and precise, will ever tell us what women really want.

Alissa Quart is the author of "Branded" and "Hothouse Kids."

Friday, December 28, 2012

The Wilson Quarterly: Beyond the Brain

In the 1990s, scientists declared that schizophrenia and other psychiatric illnesses were pure brain disorders that would eventually yield to drugs. Now they are recognizing that social factors are among the causes, and must be part of the cure.
By the time I met her, Susan was a success story. She was a student at the local community college. She had her own apartment, and she kept it in reasonable shape. She did not drink, at least not much, and she did not use drugs, if you did not count marijuana. She was a big, imposing black woman who defended herself aggressively on the street, but she had not been jailed for years. All this was striking because Susan clearly met criteria for a diagnosis of schizophrenia, the most severe and debilitating of psychiatric disorders. She thought that people listened to her through the heating pipes in her apartment. She heard them muttering mean remarks. Sometimes she thought she was part of a government experiment that was beaming rays on black people, a kind of technological Tuskegee. She felt those rays pressing down so hard on her head that it hurt. Yet she had not been hospitalized since she got her own apartment, even though she took no medication and saw no psychiatrists. That apartment was the most effective antipsychotic she had ever taken.
Twenty years ago, most psychiatrists would have agreed that Susan had a brain disorder for which the only reasonable treatment was medication. They had learned to reject the old psychoanalytic ideas about schizophrenia, and for good reasons. When psychoanalysis dominated American psychiatry, in the mid-20th century, clinicians believed that this terrible illness, with its characteristic combination of hallucinations (usually auditory), delusions, and deterioration in work and social life, arose from the patient's own emotional conflict. Such patients were unable to reconcile their intense longing for intimacy with their fear of closeness. The theory mostly blamed the mother. She was "schizophrenogenic." She delivered conflicting messages of hope and rejection, and her ambivalence drove her child, unable to know what was real, into the paralyzed world of madness. It became standard practice in American psychiatry to regard the mother as the cause of the child's psychosis, and standard practice to treat schizophrenia with psychoanalysis to counteract her grim influence. The standard practice often failed.
The 1980s saw a revolution in psychiatric science, and it brought enormous excitement about what the new biomedical approach to serious psychiatric illness could offer to patients like Susan. To signal how much psychiatry had changed since its tweedy psychoanalytic days, the National Institute of Mental Health designated the 1990s as the "decade of the brain." Psychoanalysis and even psychotherapy were said to be on their way out. Psychiatry would focus on real disease, and psychiatric researchers would pinpoint the biochemical causes of illness and neatly design drugs to target them.
Schizophrenia became a poster child for the new approach, for it was the illness the psychoanalysis of the previous era had most spectacularly failed to cure. Psychiatrists came to see the assignment of blame to the schizophrenogenic mother as an unforgivable sin. Such mothers, they realized, had not only been forced to struggle with losing a child to madness, but with the self-denigration and doubt that came from being told that they had caused the misery in the first place. The pain of this mistake still reverberates through the profession. In psychiatry it is now considered not only incorrect but morally wrong to see the parents as responsible for their child's illness. I remember talking to a young psychiatrist in the late 1990s, back when I was doing an anthropological study of psychiatric training. I asked him what he would want non-psychiatrists to know about psychiatry. "Tell them," he said, "that schizophrenia is no one's fault."     
It is now clear that the simple biomedical approach to serious psychiatric illnesses has failed in turn. At least, the bold dream that these maladies would be understood as brain disorders with clearly identifiable genetic causes and clear, targeted pharmacological interventions (what some researchers call the bio-bio-bio model, for brain lesion, genetic cause, and pharmacological cure) has faded into the mist. To be sure, it would be too strong to say that we should no longer think of schizophrenia as a brain disease. One often has a profound sense, when confronted with a person diagnosed with schizophrenia, that something has gone badly wrong with the brain.
Yet the outcome of two decades of serious psychiatric science is that schizophrenia now appears to be a complex outcome of many unrelated causes—the genes you inherit, but also whether your mother fell ill during her pregnancy, whether you got beaten up as a child or were stressed as an adolescent, even how much sun your skin has seen. It's not just about the brain. It's not just about genes. In fact, schizophrenia looks more and more like diabetes. A messy array of risk factors predisposes someone to develop diabetes: smoking, being overweight, collecting fat around the middle rather than on the hips, high blood pressure, and yes, family history. These risk factors are not intrinsically linked. Some of them have something to do with genes, but most do not. They hang together so loosely that physicians now speak of a metabolic "syndrome," something far looser and vaguer than an "illness," let alone a "disease." Psychiatric researchers increasingly think about schizophrenia in similar terms.
And so the schizophrenogenic mother is back. Not in the flesh, perhaps. Few clinicians talk anymore about cold, rejecting mothers—"refrigerator" mothers, to use the old psychoanalytic tag. But they talk about stress and trauma and culture. They talk about childhood adversity—being beaten, bullied, or sexually abused, the kind of thing that the idea of the schizophrenogenic mother was meant to capture, though in the new research the assault is physical and the abuser is likely male. Clinicians recognize that having a decent place to live is sometimes more important than medication. Increasingly, the valuable research is done not only in the laboratory but in the field, by epidemiologists and even anthropologists. What happened?
The first reason the tide turned is that the newer, targeted medications did not work very well. It is true that about a third of those who take antipsychotics improve markedly. But the side effects of antipsychotics are not very pleasant. They can make your skin crawl as if ants were scuttling underneath the surface. They can make you feel dull and bloated. While they damp down the horrifying hallucinations that can make someone's life a misery—harsh voices whispering "You're stupid" dozens of times a day, so audible that the sufferer turns to see who spoke—it is not as if the drugs restore most people to the way they were before they fell sick. Many who are on antipsychotic medication are so sluggish that they are lucky if they can work menial jobs.
Some of the new drugs' problems could be even more serious. For instance, when clozapine was first released in the United States in 1989, under the brand name Clozaril, headlines announced a new era in the treatment of psychiatric illness. Observers described dramatic remissions that unlocked the prison cage created by the schizophrenic mind, returning men and women to themselves. But Clozaril also carried the risk of a dangerous side effect: in some patients, the body stopped producing white blood cells — a condition called agranulocytosis — leaving them vulnerable to fatal infection. Consequently, those who took the drug had to be monitored constantly, their blood drawn weekly, their charts reviewed. Clozaril could cost $9,000 per year. But it was meant to set the mind free.
Yet Clozaril turned out not to be a miracle drug, at least for most of those who took it. Two decades after its release, a reanalysis published in The Archives of General Psychiatry found that on average, the older antipsychotics — such as Thorazine, mocked in the novel "One Flew Over the Cuckoo's Nest" for the fixed, glassy stares it produced in those who took it — worked as well as the new generation, and at a fraction of the cost. Then there was more bad news, which washed like a tidal wave across the mental health world in the late 1990s, as if the facts had somehow been hidden from view. These new antipsychotics caused patients to gain tremendous amounts of weight. On average, people put on 10 pounds in their first 10 weeks on Clozaril. They could gain a hundred pounds in a year. It made them feel awful. I remember a round young woman whose eyes suddenly filled with tears as she told me she once had been slender.
The weight not only depressed people. It killed them. People with schizophrenia die at a rate far higher than that of the general population, and most of that increase is not due to suicide. In a now famous study of patients on Clozaril, more than a third developed diabetes in the first five years of use alone.
The second reason the tide turned against the simple biomedical model is that the search for a genetic explanation fell apart. Genes are clearly involved in schizophrenia. The child of someone with schizophrenia has a tenfold increase in the risk of developing the disorder; the identical twin of someone with schizophrenia has a one-in-two chance of falling ill. By contrast, the risk that a child of someone with Huntington's chorea—a terrible convulsive disorder caused by a single inherited gene—will go on to develop the disease goes up by a factor of 10,000. If you inherit the gene, you will die of the disease.
Schizophrenia doesn't work like that. The effort to narrow the number of genes that may play a role has been daunting. A leading researcher in the field, Ridha Joober, has argued that there are so many genes involved, and the effects of any one gene are so small, that the serious scientist working in the field should devote his or her time solely to identifying genes that can be shown not to be relevant. The number of implicated genes is so great that Schizophrenia Forum, an excellent Web site devoted to organizing the scientific research on the disorder—the subject of 50,000 published articles in the last two decades—features what Joober has called a "gene of the week" section. Another scientist, Robin Murray, one of the most prominent schizophrenia researchers in Europe, has pointed out that you can now track the scientific status of a gene the way you follow the performance of a sports team. He said he likes to go online to the Schizophrenia Forum to see how his favorite genes are faring.
The third reason for the pushback against the biomedical approach is that a cadre of psychiatric epidemiologists and anthropologists has made clear that culture really matters. In the early days of the biomedical revolution, when schizophrenia epitomized the pure brain disorder, the illness was said to appear at the same rate around the globe, as if true brain disease respected no social boundaries and was found in all nations, classes, and races in equal measure. This piece of dogma was repeated with remarkable confidence from textbook to textbook, driven by the fervent anti-psychoanalytic insistence that the mother was not to blame. No one should ever have believed it. As the epidemiologist John McGrath dryly remarked, "While the notion that schizophrenia respects human rights is vaguely ennobling, it is also frankly bizarre." In recent years, epidemiologists have been able to demonstrate that while schizophrenia is rare everywhere, it is much more common in some settings than in others, and in some societies the disorder seems more severe and unyielding. Moreover, when you look at the differences, it is hard not to draw the conclusion that there is something deeply social at work behind them.
Schizophrenia has a more benign course and outcome in the developing world. The best data come from India. In the study that established the difference, researchers looking at people two years after they first showed up at a hospital for care found that they scored significantly better on most outcome measures than a comparable group in the West. They had fewer symptoms, took less medication, and were more likely to be employed and married. The results were dissected, reanalyzed, then replicated—not in a tranquil Hindu village, but in the chaotic urban tangle of modern Chennai. No one really knows why Indian patients did so well, but increasingly, psychiatric scientists are willing to attribute the better outcomes to social factors. For one thing, families are far more involved in the ill person's care in India. They come to all the appointments, manage the medications, and allow the patients to live with them indefinitely. Compared to Europeans and Americans, they yell at the patients less.
More ...