Friday, January 15, 2010

Shooting Up to Slow Aging

NEARLY EVERY SUNDAY morning — Easter and Mother's Day included — John Bellizzi says goodbye to his wife, Francesca, grabs an equipment bag and slides into the front seat of his black BMW. He drives to a high-school soccer field about 10 miles from his home in the New York City suburb of Rye.

Bellizzi, who is 51, is a member of the Old Timers Soccer Club, a band of stubborn, aging athletes who refuse to fall under the spell of golf. Technically, these are just pickup games, but they have been happening weekly since the early 1980s. The players go to the trouble of hiring a referee and battle full tilt (think slide tackles and heels-over-head bicycle kicks) for an hour and a half. Many of them were high-school and collegiate stars, decades ago. "One guy had a hip replacement," Bellizzi, a former soccer captain at Queens College, says. "He was out for a year, then he came back."

Advil, hot tubs and surgery keep most of the Old Timers going, but Bellizzi has ventured further. Two summers ago he became a patient of Dr. Florence Comite, a Manhattan endocrinologist affiliated with Cenegenics Medical Institute. Cenegenics, a privately held company based in Las Vegas, claims to have 10,000 patients and annual revenue of $50 million, making it the country's foremost purveyor of so-called age-management medicine.

Comite's relationship to Bellizzi is like that of an ace mechanic to a classic car. Her job is to keep him finely tuned despite worn parts. "I consider what I do aggressive prevention, the basis of which is metabolism modulation," Comite says. "Twenty years from now, this will be the standard of care."

Bellizzi is a vice president of business development at Thomson Reuters. Every three months a Cenegenics contractor comes to his office in Stamford, Conn., and draws 10 vials of blood. Comite receives a lab report that isolates scores of variables on those samples, among them glucose and cortisol, a stress hormone produced by the adrenal glands. The readings, in part, tell her if any metabolism tweaking is in order.

Under Comite's guidance — and at an annual cost of about $10,000, most of it not covered by insurance — Bellizzi has gobbled vitamins and prescription-strength Omega-3 fatty acids. He follows a low-glycemic diet, lifts weights and jogs, all of which is familiar-enough health-and-fitness fare. Comite asserts, however, that "lifestyle alone isn't enough" to counter the corrosive effects of aging. Therefore, twice a week Bellizzi grabs a pinch of abdominal skin and injects himself with human chorionic gonadotropin, or H.C.G., a hormone distilled from the urine of pregnant women.

More ...

Doctor and Patient - Looking Beyond MCATs to Pick Future Doctors

Not long ago, a friend confessed that her son, who spends much of his free time volunteering at a children's hospital and who is applying to medical school, has been particularly anxious about his future. "His test scores are just O.K.," my friend said, the despair in her voice nearly palpable. "I know he'd be a great doctor, but who he is doesn't seem to matter to medical schools as much as how he does on tests."

Her comment brought me back to the many anxious conversations I had had with friends when we were applying to medical school. Over and over again, we asked ourselves: Do we really need to be good at multiple-choice exams in order to be a good doctor?

We were referring of course to not just any exam, but to the Big One — the Medical College Admission Test, or MCAT, the standardized cognitive assessment exam that measures mastery of the premedical curriculum. Back then, as now, American medical school admissions committees required every applicant to sit for the MCAT.

While medical schools have since taken pains to assure applicants that recommendation letters and essays also weigh heavily, many candidates continue to believe, erroneously or not, that the MCAT can make or break one's chances. Competition to get into medical school remains fierce, with over 42,000 highly qualified individuals vying for just a few more than 18,000 slots at medical schools across the country.

With those kinds of statistics and no reliable standardized way to evaluate personality, it is inevitable that the MCAT will have a crucial role in medical school admissions. But does that guarantee that the applicants admitted are also destined to become the best doctors?

Maybe not.

According to a recent study in The Journal of Applied Psychology, there is another kind of exam that may be more predictive of how successful students will be in medicine: personality testing.

For nearly a decade, three industrial and organizational psychologists from the United States and Europe followed more than 600 medical students in Belgium, where premedical and medical school curriculums are combined into a single seven-year program. As in the United States, the early portion of their education is focused on acquiring basic science knowledge through lectures and classroom work; the latter part is devoted to mastering clinical knowledge and spending time with patients.

At the start of the study, the researchers administered a standardized personality test and assessed each student for five different dimensions of personality — extraversion, neuroticism, openness, agreeableness and conscientiousness. They then followed the students through their schooling, taking note of the students' grades, performance and attrition rates.

The investigators found that the results of the personality test had a striking correlation with the students' performance. Neuroticism, or an individual's likelihood of becoming emotionally upset, was a constant predictor of a student's poor academic performance and even attrition. Being conscientious, on the other hand, was a particularly important predictor of success throughout medical school. And the importance of openness and agreeableness increased over time, though neither did as significantly as extraversion. Extraverts invariably struggled early on but ended up excelling as their training entailed less time in the classroom and more time with patients.

"The noncognitive, personality domain is an untapped area for medical school admissions," said Deniz S. Ones, a professor of psychology at the University of Minnesota and one of the authors of the study. "We typically address it in a more haphazard way than we do cognitive ability, relying on recommendations, essays and either structured or unstructured interviews. We need to close the loop on all of this."

Some schools have tried to use a quantitative rating system to evaluate applicant essays and letters of recommendation, but the results remain inconsistent. "Even with these attempts to make the process more sophisticated, there is no standardization," Dr. Ones said. "Some references might emphasize conscientiousness, and some interviewers might focus on extraversion. That nonstandardization has costs in terms of making wrong decisions based on personality characteristics."

By using standardized assessments of personality, a medical school admissions committee can get a better sense of how a candidate stands relative to others. "If I know someone is not just stress-prone, but stress-prone at the 95th percentile rather than the 65th," Dr. Ones said, "I would have to ask myself if that person could handle the stress of medicine."

While standardized tests like the MCAT and the SAT have been criticized for putting certain population groups at a disadvantage, the particular personality test used in this study has been shown to work consistently across different cultures and backgrounds. "This test shows virtually none or very tiny differences between different ethnic or minority groups," Dr. Ones noted. Because of this reliability, the test is a potentially invaluable adjunct to more traditional knowledge-based testing. "It could work as an additional predictive tool in the system," she said.

One perennial question that personality testing could help to answer is whether hard work can make up for differences in cognitive ability. "Some of our data says yes," Dr. Ones said. "If someone is at the 15th percentile of the cognitive test but at the 95th percentile of conscientiousness, chances are that the student is going to make it." That student may even eventually outperform peers who have higher cognitive test scores but who are less conscientious or more neurotic and stress-prone.

But these standardized tests, personality or cognitive, can be useful only after medical schools, and the public they serve, decide what characteristics are most important for the next generation of doctors. "If a medical school is all about graduating great researchers, then I would tell them not to weigh the results of the personality test that heavily," Dr. Ones said. "But if you want doctors who are practitioners, valued members in terms of serving greater public, then you have to pay close attention to these results."

She added: "When you ask your friends, they will describe you in terms of your personality. Rarely will you get a description of your cognitive ability. Personality is what makes us who we are."

Tuesday, January 12, 2010

Doctors Often Delay Conversations About Death With Terminal Patients

It's a conversation that most people dread, doctors and patients alike. The cancer is terminal, time is short, and tough decisions loom — about accepting treatment or rejecting it, and choosing where and how to die.

When is the right time — if there is one — to bring up these painful issues with someone who is terminally ill?

Guidelines for doctors say the discussion should begin when a patient has a year or less to live. That way, patients and their families can plan whether they want to do everything possible to stay alive, or to avoid respirators, resuscitation, additional chemotherapy and the web of tubes, needles, pumps and other machines that often accompany death in the hospital.

But many doctors, especially older ones and specialists, say they would postpone those conversations, according to a study published online Monday in the journal Cancer.

It's not entirely clear whether these doctors are remiss for not speaking up — or whether the guidelines are unrealistic. Advice that sounds good on paper may be no match for the emotions on both sides when it comes to facing patients and their families and admitting that it will soon be over, that all medicine can offer is a bit of comfort while the patient waits to die.

Dr. Nancy L. Keating, the first author of the study and an associate professor of medicine and health care policy at Harvard, said not much was known about how, when or even if doctors were having these difficult talks with dying patients. But she said that her research team suspected that communication was falling short, because studies have shown that even though most people want to die at home, most wind up dying in the hospital.

The researchers surveyed 4,074 doctors who took care of cancer patients, instructing them to imagine one who had only four to six months left, but was still feeling well. Then the doctors were asked when they would discuss the prognosis, whether the patient wanted resuscitation or hospice care, and where he or she wanted to die.

The results came as a surprise: the doctors were even more reluctant to ask certain questions than the researchers had expected. Although 65 percent said they would talk about the prognosis "now," far fewer would discuss the other issues at the same time: resuscitation, 44 percent; hospice, 26 percent; site of death, 21 percent. Instead, most of the doctors said they would rather wait until the patients felt worse or there were no more cancer treatments to offer.

They were not asked for their reasoning, but Dr. Keating offered several possibilities. One is that doctors may disagree with the guidelines, which are based on expert opinion rather than data.

"Or they may not be comfortable discussing it," she said. "These conversations are time-consuming and difficult. Some doctors may feel patients will lose hope. It's easier to say, 'Let's try another round of chemotherapy,' instead of having a heart-to-heart discussion." Training may also be a factor, Dr. Keating said. Medical schools spend more time on end-of-life issues than they did in the past, and the greater willingness of younger doctors to broach the subject may reflect that change.

Dr. Daniel Laheru, an associate professor at the Kimmel Cancer Center at Johns Hopkins and a specialist in pancreatic and colorectal cancers, said he was not surprised by the study.

"The natural tendency is not to provide more information about this than you have to," he said. "It's such an uncomfortable conversation and it takes such a long time to do it right."

He added, "People come to us with hope, and if you kind of yank that away from them right away, it's very unsettling."

A terminal diagnosis plus the grim details of "do not resuscitate" orders and hospice care may be too much for a patient to hear in one day. Dr. Laheru said he tried to prepare patients on their first visit for the idea that during later visits they would discuss different possible outcomes.

"They don't always hear that part," he said.

Dr. John Boockvar, a neurosurgeon at NewYork-Presbyterian/Weill Cornell Medical Center who treats many patients with malignant brain tumors, said he favored postponing such discussions until the end was drawing close. During his own late father's illness with leukemia, he said, his family was upset by an oncologist who brought up end-of-life issues early.

"As a patient and a family member, I don't know if I would have wanted to hear a doctor say, 'In 18 months we'll be dealing with hospice or end-of-life discussions — do you want to have that discussion now?' " Dr. Boockvar said. "I don't know what the emotional benefit is to the family. I don't think it's been studied."

As a doctor treating patients who are terminally ill, he went on, he did not hesitate to discuss end-of-life issues. But he said, "As the time approaches, there's usually ample time."

But Dr. David R. Hilden, an internist at Hennepin County Medical Center in Minneapolis and an assistant professor of medicine at the University of Minnesota, is not so sure.

"I think many of us wait until there's just a few weeks left and then you have no choice," he said. "It's going to happen in a week or two, and they're in the hospital and they're on their last legs. The time to talk is much earlier."

Without planning, Dr. Hilden said, dying patients may wind up in exactly the situation they dreaded most, tethered to machines in a hospital instead of being kept comfortable at home in their own beds.

"This last week, I had a patient with prostate cancer and end-stage heart disease, who probably has less than a year," he said. "I talked to him and his wife. 'How do you want your remaining days to be? How much do you want us to do?' He and his wife were very receptive. Many patients appreciate it. We had a good conversation. It's easier when the patient is older and it's not entirely unexpected. He's 86."

The patient said he did not want tubes or machines, but just wanted to be comfortable for his last few months.

"They were at peace with it," Dr. Hilden said, adding that many patients who get aggressive treatment for advanced cancer might in retrospect have made a different choice.

"They might say: 'After that last three or four months of radiation and chemotherapy, I'm sick, I'm nauseated, my hair fell out and it didn't extend my life. I might not have done it if I'd known, if I had had the chance.' "

Dr. Keating agreed, saying she thought that often when terminally ill patients choose to continue chemotherapy, they don't understand its limits.

"They say, 'I want to do everything,' and they mean 'everything to cure me,' " she said. "They don't understand it's not curative."

Despite the difficulties, she went on, doctors should level with their patients.

"When you know someone's going to die of their disease, it's only fair to the patients to help them understand that," Dr. Keating said. "But these conversations are very challenging. Figuring out how to do it well — it's so tricky. It's definitely not something everybody believes in."

Sunday, January 10, 2010

The Americanization of Mental Illness

Americans, particularly if they are of a certain leftward-leaning, college-educated type, worry about our country's blunders into other cultures. In some circles, it is easy to make friends with a rousing rant about the McDonald's near Tiananmen Square, the Nike factory in Malaysia or the latest blowback from our political or military interventions abroad. For all our self-recrimination, however, we may have yet to face one of the most remarkable effects of American-led globalization. We have for many years been busily engaged in a grand project of Americanizing the world's understanding of mental health and illness. We may indeed be far along in homogenizing the way the world goes mad.

This unnerving possibility springs from recent research by a loose group of anthropologists and cross-cultural psychiatrists. Swimming against the biomedical currents of the time, they have argued that mental illnesses are not discrete entities like the polio virus with their own natural histories. These researchers have amassed an impressive body of evidence suggesting that mental illnesses have never been the same the world over (either in prevalence or in form) but are inevitably sparked and shaped by the ethos of particular times and places. In some Southeast Asian cultures, men have been known to experience what is called amok, an episode of murderous rage followed by amnesia; men in the region also suffer from koro, which is characterized by the debilitating certainty that their genitals are retracting into their bodies. Across the Fertile Crescent of the Middle East there is zar, a condition related to spirit-possession beliefs that brings forth dissociative episodes of laughing, shouting and singing.

The diversity that can be found across cultures can be seen across time as well. In his book "Mad Travelers," the philosopher Ian Hacking documents the fleeting appearance in the 1890s of a fugue state in which European men would walk in a trance for hundreds of miles with no knowledge of their identities. The hysterical-leg paralysis that afflicted thousands of middle-class women in the late 19th century not only gives us a visceral understanding of the restrictions set on women's social roles at the time but can also be seen from this distance as a social role itself — the troubled unconscious minds of a certain class of women speaking the idiom of distress of their time.

"We might think of the culture as possessing a 'symptom repertoire' — a range of physical symptoms available to the unconscious mind for the physical expression of psychological conflict," Edward Shorter, a medical historian at the University of Toronto, wrote in his book "Paralysis: The Rise and Fall of a 'Hysterical' Symptom." "In some epochs, convulsions, the sudden inability to speak or terrible leg pain may loom prominently in the repertoire. In other epochs patients may draw chiefly upon such symptoms as abdominal pain, false estimates of body weight and enervating weakness as metaphors for conveying psychic stress."

In any given era, those who minister to the mentally ill — doctors or shamans or priests — inadvertently help to select which symptoms will be recognized as legitimate. Because the troubled mind has been influenced by healers of diverse religious and scientific persuasions, the forms of madness from one place and time often look remarkably different from the forms of madness in another.

That is until recently.

For more than a generation now, we in the West have aggressively spread our modern knowledge of mental illness around the world. We have done this in the name of science, believing that our approaches reveal the biological basis of psychic suffering and dispel prescientific myths and harmful stigma. There is now good evidence to suggest that in the process of teaching the rest of the world to think like us, we've been exporting our Western "symptom repertoire" as well. That is, we've been changing not only the treatments but also the expression of mental illness in other cultures. Indeed, a handful of mental-health disorders — depression, post-traumatic stress disorder and anorexia among them — now appear to be spreading across cultures with the speed of contagious diseases. These symptom clusters are becoming the lingua franca of human suffering, replacing indigenous forms of mental illness.

More ...

The Wrong Story About Depression

"Startling results," promised the CNN teasers, building anticipation for a segment on this week's big mental health news: a study led by researchers at the University of Pennsylvania indicating that the antidepressants Paxil and imipramine work no better than placebos ("than sugar pills," said CNN) for people with mild to moderate depression.

Happy pills don't work, the story quickly became, even though, boiled down to that headline, it was neither startling nor particularly true.

It sounded true. After all, any number of experts have argued that antidepressants — and selective serotonin reuptake inhibitors like Paxil in particular — are overhyped and oversold. And after years of hearing about shady practices within the pharmaceutical industry, and of psychiatrists who enrich themselves in the shadows by helping the industry market its drugs, we are primed to believe stories of psychiatric trickery.

Yet in all the excitement about "startling" news and "sugar pills," a more nuanced and truer story about mental health care in America was all but lost.

That story begins to take shape when you consider what the new study actually said: Antidepressants do work for very severely depressed people, as well as for those whose mild depression is chronic. However, the researchers found, the pills don't work for people who aren't really depressed — people with short-term, minor depression whose problems tend to get better on their own. For many of them, it's often been observed, merely participating in a drug trial (with its accompanying conversation, education and emphasis on self-care) can be anti-depressant enough.

None of this comes as news to people who have been prescribing or studying antidepressants over the past 20 years. Neither is it all that likely to change the practice of treating depression — at least as it's carried out by responsible doctors.

After all, people who are depressed for the first time, or have been depressed for only a short time, or are upset after a personal setback, aren't considered ideal candidates for immediate drug therapy. And, contrary to popular belief, there's no evidence that most psychiatrists regularly prescribe pills straight off to people who can get better by reading about depression, exercising or doing nothing. What numbers do exist, said Peter Kramer, who has written extensively on antidepressant use in books like "Listening to Prozac," indicate that relatively few people with minimal depression leave psychiatrists' offices with a prescription.

That people have come to believe otherwise may be in part because most patients with depression are treated by general practitioners, not psychiatrists. Studies have shown that these primary care doctors don't strenuously enough screen their patients for depression before prescribing drugs, or closely monitor their care afterward.

And here the truer story about mental health care in America begins to unfold. The trouble is not that the drugs don't work; it's that the care is not very good.

Inadequate treatment by nonspecialists is only a piece of the problem. In fact, most Americans with depression, rather than being overmedicated, are undertreated or not treated at all. This might have been big news this week, too, had anyone noticed another academic study, a survey of nearly 16,000 people published this month in The Archives of General Psychiatry, which looked more broadly at the picture of depression in America. The survey found that those who did get care were given psychotherapy more often than drugs. That finding might give heart to those who would prefer to see more alternatives to psychiatric drugs — if it weren't for the fact that so much psychotherapy is so bad.

In 2008, a team of psychologists brought this point home in blunt terms in the journal Psychological Science in the Public Interest. "Despite the availability of highly effective interventions," they wrote, "relatively few psychologists learn or practice these interventions."

This is the big picture of mental health care in America: not perfectly healthy people popping pills for no reason, but people with real illnesses lacking access to care; facing barriers like ignorance, stigma and high prices; or finding care that is ineffective.

It is a societywide concern that a co-author of the new antidepressants study readily acknowledges. "What we reported on was a very small piece of a very large problem," Robert J. DeRubeis, a professor of psychology at the University of Pennsylvania, told me. "Those kinds of things are not being sorted out in this country because there's no system. Nobody's asking these questions."

With health care reform almost certainly on the horizon, perhaps now we can hope they will start asking.

Judith Warner, a former columnist at The New York Times, is the author of the forthcoming "We've Got Issues: Children and Parents in the Age of Medication."