Saturday, September 22, 2012

The Wilson Quarterly: Beyond the Brain

In the 1990s, scientists declared that schizophrenia and other psychiatric illnesses were pure brain disorders that would eventually yield to drugs. Now they are recognizing that social factors are among the causes, and must be part of the cure.
By the time I met her, Susan was a success story. She was a student at the local community college. She had her own apartment, and she kept it in reasonable shape. She did not drink, at least not much, and she did not use drugs, if you did not count marijuana. She was a big, imposing black woman who defended herself aggressively on the street, but she had not been jailed for years. All this was striking because Susan clearly met criteria for a diagnosis of schizophrenia, the most severe and debilitating of psychiatric disorders. She thought that people listened to her through the heating pipes in her apartment. She heard them muttering mean remarks. Sometimes she thought she was part of a government experiment that was beaming rays on black people, a kind of technological Tuskegee. She felt those rays pressing down so hard on her head that it hurt. Yet she had not been hospitalized since she got her own apartment, even though she took no medication and saw no psychiatrists. That apartment was the most effective antipsychotic she had ever taken.
Twenty years ago, most psychiatrists would have agreed that Susan had a brain disorder for which the only reasonable treatment was medication. They had learned to reject the old psychoanalytic ideas about schizophrenia, and for good reasons. When psychoanalysis dominated American psychiatry, in the mid-20th century, clinicians believed that this terrible illness, with its characteristic combination of hallucinations (usually auditory), delusions, and deterioration in work and social life, arose from the patient's own emotional conflict. Such patients were unable to reconcile their intense longing for intimacy with their fear of closeness. The science mostly blamed the mother. She was "schizophrenogenic." She delivered conflicting messages of hope and rejection, and her ambivalence drove her child, unable to know what was real, into the paralyzed world of madness. It became standard practice in American psychiatry to regard the mother as the cause of the child's psychosis, and standard practice to treat schizophrenia with psychoanalysis to counteract her grim influence. The standard practice often failed.
The 1980s saw a revolution in psychiatric science, and it brought enormous excitement about what the new biomedical approach to serious psychiatric illness could offer to patients like Susan. To signal how much psychiatry had changed since its tweedy psychoanalytic days, the National Institute of Mental Health designated the 1990s as the "decade of the brain." Psychoanalysis and even psychotherapy were said to be on their way out. Psychiatry would focus on real disease, and psychiatric researchers would pinpoint the biochemical causes of illness and neatly design drugs to target them.
Schizophrenia became a poster child for the new approach, for it was the illness the psychoanalysis of the previous era had most spectacularly failed to cure. Psychiatrists came to see the assignment of blame to the schizophrenogenic mother as an unforgivable sin. Such mothers, they realized, had not only been forced to struggle with losing a child to madness, but with the self-denigration and doubt that came from being told that they had caused the misery in the first place. The pain of this mistake still reverberates through the profession. In psychiatry it is now considered not only incorrect but morally wrong to see the parents as responsible for their child's illness. I remember talking to a young psychiatrist in the late 1990s, back when I was doing an anthropological study of psychiatric training. I asked him what he would want non-psychiatrists to know about psychiatry. "Tell them," he said, "that schizophrenia is no one's fault."     
It is now clear that the simple biomedical approach to serious psychiatric illnesses has failed in turn. At least, the bold dream that these maladies would be understood as brain disorders with clearly identifiable genetic causes and clear, targeted pharmacological interventions (what some researchers call the bio-bio-bio model, for brain lesion, genetic cause, and pharmacological cure) has faded into the mist. To be sure, it would be too strong to say that we should no longer think of schizophrenia as a brain disease. One often has a profound sense, when confronted with a person diagnosed with schizophrenia, that something has gone badly wrong with the brain.
Yet the outcome of two decades of serious psychiatric science is that schizophrenia now appears to be a complex outcome of many unrelated causes—the genes you inherit, but also whether your mother fell ill during her pregnancy, whether you got beaten up as a child or were stressed as an adolescent, even how much sun your skin has seen. It's not just about the brain. It's not just about genes. In fact, schizophrenia looks more and more like diabetes. A messy array of risk factors predisposes someone to develop diabetes: smoking, being overweight, collecting fat around the middle rather than on the hips, high blood pressure, and yes, family history. These risk factors are not intrinsically linked. Some of them have something to do with genes, but most do not. They hang together so loosely that physicians now speak of a metabolic "syndrome," something far looser and vaguer than an "illness," let alone a "disease." Psychiatric researchers increasingly think about schizophrenia in similar terms.
And so the schizophrenogenic mother is back. Not in the flesh, perhaps. Few clinicians talk anymore about cold, rejecting mothers—"refrigerator" mothers, to use the old psychoanalytic tag. But they talk about stress and trauma and culture. They talk about childhood adversity—being beaten, bullied, or sexually abused, the kind of thing that the idea of the schizophrenogenic mother was meant to capture, though in the new research the assault is physical and the abuser is likely male. Clinicians recognize that having a decent place to live is sometimes more important than medication. Increasingly, the valuable research is done not only in the laboratory but in the field, by epidemiologists and even anthropologists. What happened?
The first reason the tide turned is that the newer, targeted medications did not work very well. It is true that about a third of those who take antipsychotics improve markedly. But the side effects of antipsychotics are not very pleasant. They can make your skin crawl as if ants were scuttling underneath the surface. They can make you feel dull and bloated. While they damp down the horrifying hallucinations that can make someone's life a misery—harsh voices whispering "You're stupid" dozens of times a day, so audible that the sufferer turns to see who spoke—it is not as if the drugs restore most people to the way they were before they fell sick. Many who are on antipsychotic medication are so sluggish that they are lucky if they can work menial jobs.
Some of the new drugs' problems could be even more serious. For instance, when clozapine was first released in the United States in 1989, under the brand name Clozaril, headlines announced a new era in the treatment of psychiatric illness. Observers described dramatic remissions that unlocked the prison cage created by the schizophrenic mind, returning men and women to themselves. Clozaril also carried the risk of a dangerous side effect: In some patients, the white blood cells that fight infection stopped being produced, and without them a patient could die of an ordinary infection. Consequently, those who took the drug had to be monitored constantly, their blood drawn weekly, their charts reviewed. Clozaril could cost $9,000 per year. But it was meant to set the mind free.
Yet Clozaril turned out not to be a miracle drug, at least for most of those who took it. Two decades after its release, a reanalysis published in the Archives of General Psychiatry found that on average, the older antipsychotics—such as Thorazine, mocked in the novel One Flew Over the Cuckoo's Nest for the fixed, glassy stares it produced in those who took it—worked as well as the new generation, and at a fraction of the cost. Then there was more bad news, which washed like a tidal wave across the mental health world in the late 1990s, as if the facts had somehow been hidden from view. These new antipsychotics caused patients to gain tremendous amounts of weight. On average, people put on 10 pounds in their first 10 weeks on Clozaril. They could gain a hundred pounds in a year. It made them feel awful. I remember a round young woman whose eyes suddenly filled with tears as she told me she once had been slender.
The weight not only depressed people. It killed them. People with schizophrenia die at a rate far higher than that of the general population, and most of that increase is not due to suicide. In a now famous study of patients on Clozaril, more than a third developed diabetes in the first five years of use alone.
The second reason the tide turned against the simple biomedical model is that the search for a genetic explanation fell apart. Genes are clearly involved in schizophrenia. The child of someone with schizophrenia has a tenfold increase in the risk of developing the disorder; the identical twin of someone with schizophrenia has a one-in-two chance of falling ill. By contrast, the risk that a child of someone with Huntington's chorea—a terrible degenerative movement disorder caused by a single dominant gene—will go on to develop the disease goes up by a factor of 10,000: from a base rate of only a few cases per 100,000 people to a one-in-two chance of inheriting the gene. If you inherit the gene, you will die of the disease.
Schizophrenia doesn't work like that. The effort to narrow the number of genes that may play a role has been daunting. A leading researcher in the field, Ridha Joober, has argued that there are so many genes involved, and the effects of any one gene are so small, that the serious scientist working in the field should devote his or her time solely to identifying genes that can be shown not to be relevant. The number of implicated genes is so great that the Schizophrenia Research Forum, an excellent Web site devoted to organizing the scientific research on the disorder—the subject of 50,000 published articles in the last two decades—features what Joober has called a "gene of the week" section. Another scientist, Robin Murray, one of the most prominent schizophrenia researchers in Europe, has pointed out that you can now track the scientific status of a gene the way you follow the performance of a sports team. He said he likes to go online to the Schizophrenia Research Forum to see how his favorite genes are faring.
The third reason for the pushback against the biomedical approach is that a cadre of psychiatric epidemiologists and anthropologists has made clear that culture really matters. In the early days of the biomedical revolution, when schizophrenia epitomized the pure brain disorder, the illness was said to appear at the same rate around the globe, as if true brain disease respected no social boundaries and was found in all nations, classes, and races in equal measure. This piece of dogma was repeated with remarkable confidence from textbook to textbook, driven by the fervent anti-psychoanalytic insistence that the mother was not to blame. No one should ever have believed it. As the epidemiologist John McGrath dryly remarked, "While the notion that schizophrenia respects human rights is vaguely ennobling, it is also frankly bizarre." In recent years, epidemiologists have been able to demonstrate that while schizophrenia is rare everywhere, it is much more common in some settings than in others, and in some societies the disorder seems more severe and unyielding. Moreover, when you look at the differences, it is hard not to draw the conclusion that there is something deeply social at work behind them.
Schizophrenia has a more benign course and outcome in the developing world. The best data come from India. In the study that established the difference, researchers looking at people two years after they first showed up at a hospital for care found that they scored significantly better on most outcome measures than a comparable group in the West. They had fewer symptoms, took less medication, and were more likely to be employed and married. The results were dissected, reanalyzed, then replicated—not in a tranquil Hindu village, but in the chaotic urban tangle of modern Chennai. No one really knows why Indian patients did so well, but increasingly, psychiatric scientists are willing to attribute the better outcomes to social factors. For one thing, families are far more involved in the ill person's care in India. They come to all the appointments, manage the medications, and allow the patients to live with them indefinitely. Compared to Europeans and Americans, they yell at the patients less.
Indian families also don't treat people with schizophrenia as if they have a soul-destroying illness. As an anthropology graduate student, Amy Sousa spent more than a year in northern India, sitting with doctors as they treated patients who came with their families into a dingy hospital where overworked psychiatrists can routinely have 10 appointments an hour. Many of the doctors didn't mention a diagnosis. Many of the families didn't ask. There was a good deal of deception—wives grinding medication into the flour for the daily chapattis they made for their husbands, doctors explaining to patients that they were completely well but should take strengthening pills to protect themselves from the ravages of their youth. As a result, none of the patients thought of themselves as having a career-ending illness, and every one of them expected to get better. And at least compared to patients in the West, they generally did.
The most remarkable recent epidemiologic finding relates to migrants: Some fall ill with schizophrenia not only at higher rates than the compatriots they leave behind, but at higher rates than the natives of the countries to which they have come. Dark-skinned migrants to Europe, mostly from the Caribbean or sub-Saharan Africa, are at risk of developing schizophrenia at rates as much as 10 times higher than those of white Europeans. This is a dramatic increase, and it has been shown by so many studies conducted with such methodological care that it cannot be dismissed as diagnostic racism, as if white clinicians confronted with angry black men simply called them "schizophrenic" (even though this sometimes happens). Nor does it seem that biology alone can explain the increased risk, although serious research is now being done to test the hypothesis that vitamin D deficiency plays a role.
Some observers think that the epidemiologic finding is a stark story about the way racism gets under the skin and drives people mad. It is probably more complicated than that. Another young anthropologist, Johanne Eliacin, spent two years doing fieldwork among African-Caribbean migrants living in London. Eliacin saw racism, and she felt viscerally her subjects' stinging sense of being unwanted and out of place. But she also saw a social world shot through with hostility and anger, in which people were isolated and often intensely lonely. The African-Caribbean people in Tottenham spoke of there being no community in the community. They held up schizophrenia as the symbol of what had gone wrong. Yes, racism lay at the root of the problem, but the tangible distress was the sense of being hopelessly trapped.
Epidemiologists have now homed in on a series of factors that increase the risk of developing schizophrenia, including being a migrant, being male, living in an urban environment, and being born poor. One of the more disconcerting findings is that if you have dark skin, your risk of falling victim to schizophrenia increases as your neighborhood whitens. Your level of risk also rises if you were beaten, taunted, bullied, sexually abused, or neglected when you were a child. In fact, how badly a child is treated may predict how severe an adult's schizophrenia becomes—and particularly, whether the adult hears harsh, hallucinatory voices that comment or command. The psychiatrist Jean-Paul Selten was the first to call this collection of risk factors an experience of "social defeat," a term commonly used to describe the actual physical besting of one animal by another. Selten argued that the chronic sense of feeling beaten down by other people could activate someone's underlying genetic vulnerability to schizophrenia.
All this—the disenchantment with the new-generation antipsychotics, the failure to find a clear genetic cause, the discovery of social causation in schizophrenia, the increasing dismay at the comparatively poor outcomes from treatment in our own health care system—has produced a backlash against the simple biomedical approach. Increasingly, treatment for schizophrenia presumes that something social is involved in its cause and ought to be involved in its cure.
You can see this backlash most clearly in the United States in the Recovery Movement, which explicitly embraces the idea that the very way you imagine an illness will affect the way you experience it—an idea that seems, well, almost psychoanalytic.  As the movement's manifesto defined it, "recovery is a process, a way of life, an attitude, and a way of approaching the day's challenges." One of the most influential patient-driven initiatives in decades, the Recovery Movement received a federal imprimatur of sorts in 2003, when the Bush administration issued a mandate promoting "recovery-oriented services." Treatment providers paid by Medicare and Medicaid were told that schizophrenia would no longer be understood as an illness with a chronic and debilitating course, a death sentence for the mind. Instead, patients and mental health professionals were instructed to believe that people with schizophrenia could live as effective members of a community, able to work and to be valued. The expectation of permanent impairment was to be replaced with hope.
In practice, the ascendancy of the Recovery Movement has meant that many programs and day treatment centers once run by nonpatients have been turned over to clients (so as to empower them), and that the staff allows clients to make more decisions about how to spend their money and what to do with their time. These changes have not come without bumps. Clients have not always made good choices; the staff has sometimes been reluctant to allow clients a free hand. The anthropologist Neely Myers, who spent months doing ethnographic fieldwork in client-run recovery services in Chicago, points out that this very American expectation that everyone will be an independent, productive citizen sets a high bar for people struggling with serious psychosis.
But the point is that the very idea of the recovery intervention upends the bio-bio-bio vision. Clients are encouraged to take their medication, of course, but the real therapeutic change is thought to come through something social: something people learn to do, say, and believe.
That is also true for other innovative strategies to treat schizophrenia. In Europe, the Hearing Voices network teaches people who hear distressing voices to negotiate with them. They are taught to treat the voices as if they were people—to talk with them, and make deals with them, as if the voices had the ability to act and decide on their own. This runs completely counter to the simple biomedical model of psychiatric illness, which presumes that voices are meaningless symptoms, epiphenomenal sequelae of lesions in the brain. Standard psychiatric practice has been to discount the voices, or to ignore them, on the grounds that doing so reminds patients that they are not real and that their commands should not be followed. One might think of the standard approach as calling a spade a spade. When voices are imagined as agents, however, they are imagined as having the ability to choose to stop talking. Members of the Hearing Voices movement report that this is what they do. In 2009, at a gathering in the Dutch city of Maastricht, person after person diagnosed with schizophrenia stood up to tell the story of learning to talk with the voices—and how the voices had then agreed to stop.
This lesson—that the world as imagined can change the world as it is—lies behind the intervention that helped Susan so much. In care as usual, people diagnosed with schizophrenia are regarded as severely disabled and thus as appropriate recipients of supported housing and other benefits. People are required to get their diagnosis to justify their placements, sometimes being asked to collect an actual piece of paper from one office and turn it in at another. Many people with schizophrenia cycle through long periods of homelessness. Few of them like it. You would think that they would line up to be housed. But they dislike the diagnosis even more than they dislike being out on the street, because the idea of being "crazy" seems even more horrifying to them than it does to those of us who have roofs over our heads.  For many months, I spent time with homeless women on the streets of Chicago who clearly met criteria for schizophrenia. They talked about going crazy as something the street did to people who were too weak to handle the life, and they thought of being crazy as having a broken brain that could never be fixed. They often refused to accept housing that required a psychiatric diagnosis, or they would take it for a while and then leave. They lived lives of restless nomadism, intermittently being hospitalized or jailed by the police when their behavior got out of hand, then being released to supported housing, then, in turn, finding their way back to the bleak streets.
The new kind of intervention simply gives people housing without asking them to admit to a diagnosis. Programs like the one that helped Susan are supported by federal funding set aside for people with serious mental illness, but the benefit is not described that way to clients. Though Susan knows that she has subsidized housing, she thinks she got it because she entered a program at a shelter to help her get off crack. Those who created programs like the one Susan is in believe that the social setting in which a patient lives and imagines herself has as much to do with her treatment as any medication. In general, the data show that they are right. People are more likely to accept housing when it is offered in these programs than in care-as-usual settings, and after they are housed their symptoms lessen—whether or not they are taking medications.
The pushback against purely biomedical treatment is also occurring with other psychiatric illnesses. The confident hope that new-generation antidepressants would cure depression—those new miracle drugs such as Prozac and Zoloft that made people thinner, sharper, and "better than well," in psychiatrist Peter D. Kramer's apt phrase—dimmed when the public learned that teenagers taking them were more prone to suicidal thoughts and behavior. No simple genetic cause for depression has emerged. There is clearly social causation in the disorder, and it too looks different in different cultures, shaped by particular causes, social settings, and methods of treatment. In the standard psychiatric textbook, Harold I. Kaplan and Benjamin J. Sadock's Comprehensive Textbook of Psychiatry, depression is now mapped out with a host of factors, some of them biological, many of them not, and the recommended treatment includes psychotherapy.
In part, this backlash against the bio-bio-bio model reflects an emerging, more sophisticated understanding of the body—epigenetics—in which the expression of genes responds to an individual's social context. There is even an effort within psychiatry to abandon diagnosis altogether and instead to treat dimensions of specific behaviors, such as fear or working memory. Realistically, this project—the Research Domain Criteria—won't dismantle the diagnostic edifice. Too much of the structure of reimbursement and care depends upon the fiction of clear-cut, biologically distinct diseases. Still, the scientists are trying.
The pushback is also a return to an older, wiser understanding of mind and body. In his Second Discourse (1754), Jean-Jacques Rousseau describes human beings as made up out of each other through their interactions, their shared language, their intense responsiveness. "The social man, always outside of himself, knows only how to live in the opinions of others; and it is, so to speak, from their judgment alone that he draws the sentiment of his own existence." We are deeply social creatures. Our bodies constrain us, but our social interactions make us who we are. The new, more socially complex approach to human suffering simply takes that fact seriously again.
http://wilsonquarterly.com/printarticle.cfm?aid=2196