Saturday, August 6, 2011

Shortchanging Cancer Patients - Ezekiel J. Emanuel

Right now cancer care is being rationed in the United States.

Probably to their great disappointment, President Obama's critics cannot blame this rationing on death panels or health care reform. Rather, it is caused by a severe shortage of important cancer drugs.

Of the 34 generic cancer drugs on the market, as of this month, 14 were in short supply. They include drugs that are the mainstay of treatment regimens used to cure leukemia, lymphoma and testicular cancer. As Dr. Michael Link, the president of the American Society of Clinical Oncology, recently told me, "If you are a pediatric oncologist, you know how to cure 70 to 80 percent of patients. But without these drugs you are out of business."

This shortage is even inhibiting research studies that can lead to higher cure rates: enrollment of patients in many clinical trials has been delayed or stopped because the drugs that are in short supply make up the standard regimens to which new treatments are added or compared.

The sad fact is, there are plenty of newer brand-name cancer drugs that do not cure anyone, but just extend life for a few months, at costs of up to $90,000 per patient. Only the older but curative cancer drugs — drugs that can cost as little as $3 per dose — have become unavailable. Most of these drugs have no substitutes, but, crazy as it seems, in some cases these shortages are forcing doctors to use brand-name drugs at more than 100 times the cost.

Only about 10 percent of the shortages can be attributed to a lack of raw materials and essential ingredients to manufacture the drugs. Most shortages appear instead to be the consequence of corporate decisions to cease production, or interruptions in production caused by money or quality problems, which manufacturers do not appear to be in a rush to fix.

If the laws of supply and demand were working properly, a drug shortage would cause a price rise that would induce other manufacturers to fill the gap. But such laws do not really apply to cancer drugs.

The underlying reason for this is that cancer patients do not buy chemotherapy drugs from their local pharmacies the way they buy asthma inhalers or insulin. Instead, it is their oncologists who buy the drugs, administer them and then bill Medicare and insurance companies for the costs.

Historically, this "buy and bill" system was quite lucrative; drug companies charged Medicare and insurance companies inflated, essentially made-up "average wholesale prices." The Medicare Prescription Drug, Improvement and Modernization Act of 2003, signed by President George W. Bush, put an end to this arrangement. It required Medicare to pay the physicians who prescribed the drugs based on a drug's actual average selling price, plus 6 percent for handling. And indirectly — because of the time it takes drug companies to compile actual sales data and the government to revise the average selling price — it restricted the price from increasing by more than 6 percent every six months.

The act had an unintended consequence. In the first two or three years after a cancer drug goes generic, its price can drop by as much as 90 percent as manufacturers compete for market share. But if a shortage develops, the drug's price should be able to increase again to attract more manufacturers. Because the 2003 act effectively limits drug price increases, it prevents this from happening. The low profit margins mean that manufacturers face a hard choice: lose money producing a lifesaving drug or switch limited production capacity to a more lucrative drug.
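The cap works like a ratchet: because the average selling price is reported with a lag, each six-month revision can rise at most about 6 percent over the last one. A back-of-the-envelope sketch (with hypothetical prices; `recovery_periods` is an illustrative helper, not anything defined in the statute) shows how slowly a collapsed generic price can climb back under such a limit:

```python
def recovery_periods(current_price, target_price, cap=0.06):
    """Count six-month periods needed for a price to recover to a target
    when each revision may rise at most `cap` over the previous one."""
    periods = 0
    while current_price < target_price:
        current_price *= 1 + cap
        periods += 1
    return periods

# Hypothetical numbers: a $100 drug goes generic and falls 90% to $10.
# Recovering even half its former price takes well over a decade.
periods = recovery_periods(10.0, 50.0)
print(periods, "six-month periods, i.e.", periods / 2, "years")
```

With those assumed figures, no shortage-driven price signal can act quickly enough to attract new manufacturers.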

The result is clear: in 2004 there were 58 new drug shortages, but by 2010 the number had steadily increased to 211. (These numbers include noncancer drugs as well.)

Unfortunately, there is no quick fix, because all solutions require legislation. A bill introduced in February by Senator Amy Klobuchar, Democrat of Minnesota, and Senator Bob Casey, Democrat of Pennsylvania, would require generic manufacturers to notify the Food and Drug Administration if they expected a supply problem or planned to stop manufacturing a drug. But the F.D.A. isn't able to force manufacturers to produce a drug, and learning about impending shortages with little authority to alleviate them is of limited benefit. Indeed, early warning could exacerbate the problem: the moment oncologists or cancer centers hear there is going to be a shortage of a critical drug, their response could well be to start hoarding.

You don't have to be a cynical capitalist to see that the long-term solution is to make the production of generic cancer drugs more profitable. Most of Europe, where brand-name drugs are cheaper than in the United States and generics are slightly more expensive, has no shortage of these cancer drugs.

One solution would be to amend the 2003 act to increase the amount Medicare pays for generic cancer drugs to the average selling price plus, say, 30 percent, after the drugs have been generic for three years. This would preserve the initial rapid price drop that makes generics affordable, but would allow for an increase in price and profits to attract more generic producers and the fixing of any manufacturing problems that subsequently arise.

Increasing the price for generic oncology drugs would have a negligible impact on overall health care costs. Total spending on generic injectable cancer drugs was $400 million last year — just 2 percent of cancer drug costs, and less than 0.5 percent of the total cost of cancer care. If we are worried about costs, we could follow Europe and pay for the higher prices by lowering what Medicare pays for the brand-name drugs that extend life by only a few months.
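The article's own percentages imply the overall scale, which a quick arithmetic check makes concrete (since the $400 million is "less than 0.5 percent" of total cancer care, the second figure is a lower bound):

```python
generic_injectable_spend = 400e6          # $400 million last year
share_of_cancer_drug_costs = 0.02         # "just 2 percent of cancer drug costs"
share_of_total_cancer_care = 0.005        # "less than 0.5 percent" => lower bound

implied_cancer_drug_costs = generic_injectable_spend / share_of_cancer_drug_costs
implied_total_cancer_care = generic_injectable_spend / share_of_total_cancer_care

print(round(implied_cancer_drug_costs / 1e9), "billion in cancer drug costs")
print("at least", round(implied_total_cancer_care / 1e9), "billion in total cancer care")
```

So even a 30 percent markup on the $400 million generic segment would add on the order of $100 million against a cancer-care total of $80 billion or more.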

A more radical approach would be to take Medicare out of the generic cancer drug business entirely. Once a drug becomes generic, Medicare should stop paying, and it should be covered by a private pharmacy plan. That way prices can better reflect the market, and market incentives can work to prevent shortages.

Scare-mongering about death panels and health care reform has diverted attention from real issues in our health care system. Shortages in curative cancer treatments are completely unacceptable. We need to stop the political demagoguery and fix the real rationing problem.

Ezekiel J. Emanuel is an oncologist and former White House adviser who will be a professor of medical ethics and health policy at the University of Pennsylvania beginning in September. He will be contributing regularly to Op-Ed.

The Phantom Menace of Sleep-Deprived Doctors

Last month something extraordinary happened at teaching hospitals around the country: Young interns worked for 16 hours straight — and then they went home to sleep. After decades of debate and over the opposition of nearly every major medical organization and 79 percent of residency-program directors, new rules went into effect that abolished 30-hour overnight shifts for first-year residents. Sanity, it seemed to people who had long been fighting for a change, had finally won out.

Of course, the overworked, sleep-deprived doctor valiantly saving lives is an archetype that is deeply rooted in the culture of physician training, not to mention television hospital dramas. William Halsted, the first chief of surgery at Johns Hopkins in the 1890s and a founder of modern medical training, required his residents to be on call 362 days a year (only later was it revealed that Halsted fueled his manic work ethic with cocaine), and for the next 100 years the attitude of the medical establishment was more or less the same. Doctors, influenced by their own residency experiences, often see hospital hazing as the most effective way to learn the practice of medicine.

But over the last three decades, a counterpoint archetype has emerged: the sleep-deprived, judgment-impaired young doctor in training who commits a serious medical error. "Doctors think they're a special class and not subject to normal limitations of physiology," says Dr. Christopher Landrigan, an associate professor at Harvard Medical School and one of the most influential voices calling for work-hour reform. A large body of research on the hazards of fatigue ultimately led to the new rule on overnight shifts by the Accreditation Council for Graduate Medical Education, the independent nonprofit group that regulates medical-residency programs.

More than anything else, it was the death of 18-year-old Libby Zion 27 years ago that served as a catalyst for reform. Zion was jerking uncontrollably and had a fever of 103 degrees when she was admitted to New York Hospital on March 4, 1984. After she was admitted, Zion was given Tylenol and evaluated by a resident and an intern. They prescribed Demerol, an opioid painkiller. But her thrashing continued, and the intern on duty, who was just eight months out of medical school, injected Haldol, a sedating antipsychotic, and restrained her to the bed. Shortly after 6 a.m., the teenager's fever shot up to 108 degrees and, despite efforts to cool her, she went into cardiac arrest. Seven hours after she was admitted, Libby Zion was declared dead.

Libby's father was Sidney Zion, a columnist for The Daily News. When Zion learned that his daughter's doctor had by then been on duty for almost 24 hours and that young doctors were routinely awake for more than 36 hours, he sued the hospital and doctors and publicized the conditions he was convinced had led to her death. Stories about overtired interns appeared in major newspapers and on "60 Minutes."

Reforms followed, albeit slowly. In 1989, New York State cut the number of hours that doctors in training could work, setting a limit of 80 hours per week. And in 2003, the accreditation council imposed the 80-hour limit on all U.S. training programs, prohibited trainees from direct patient care after 24 hours of continuous duty and mandated at least one day off per week.

To Landrigan, this was tremendous, if incomplete, progress. He ran a yearlong study during which a team of interns at Brigham and Women's Hospital worked alternate rotations, one on the traditional schedule — a 30-hour shift every third night — and the other on a staggered schedule, during which the longest shift was only 16 hours. The results, published in 2004 in The New England Journal of Medicine, shocked the medical world. Interns working the traditional 30-hour shifts made 36 percent more serious medical errors, including ordering drug overdoses, missing a diagnosis of Lyme disease, trying to drain fluid from the wrong lung and administering drugs known to provoke an allergy. Thomas Nasca, the director of the accreditation council, cites this data as the single strongest argument for limiting doctors' work hours.

But this is where the neat story of the correlation between doctor fatigue and hospital error hits a wall. Landrigan's research was compelling, but his study was small and controlled. In normal, day-to-day practice in hospitals across the country, medical errors didn't fall when work hours were reduced. A massive national study of 14 million veterans and Medicare patients, published in 2009, showed no major improvement in safety after the 2003 reforms. The researchers parsed the data to see whether even a subset of hospitals improved, but the disappointing results appeared in hospitals of all sizes and all levels of academic rigor. "The fact that the policy appeared to have no impact on safety is disappointing," says David Bates, a professor at the Harvard School of Public Health and a national authority on medical errors.

Landrigan was dumbfounded. His experimental results aside, he was also moved by his own experience. When he was a resident in pediatrics at Children's Hospital in Boston, Landrigan spent every third night in the intensive-care unit working a 36-hour shift. (I was also a resident there at the time.) One night in 1996, he had just gone to sleep in a call room when a nurse burst in to say that a 9-year-old girl, who had been admitted with asthma, was deteriorating rapidly. Rather than rushing to the suffocating girl, Landrigan, dazed from fatigue, arose from bed, sauntered into a bathroom, locked the door and began brushing his teeth in a confused state. Another doctor responded and put the girl on a ventilator, saving her life.

Landrigan was working on his own large-scale study when the 2009 Medicare study came out. His team read the hospital charts for thousands of patients from 2002 to 2007. The results, published last year, were equally sobering and showed that roughly a fifth of all hospitalized patients suffered harm from medical errors; cutting trainee work hours had no impact.

The question, then, is why? There are several possible explanations for the failure of the nationwide 80-hour rule to reduce medical harms. In 2008, the journal Pediatrics reported that two-thirds of residents regularly broke the rule, suggesting that poor enforcement, perhaps related to ingrained norms, had undercut the reform. Landrigan, one of the authors of that study, also thinks that the accreditation council did not go far enough; it had not, after all, banned being on call overnight and still allowed shifts up to 30 hours. Now that the council has abolished extended shifts, at least for first-year residents, Landrigan expects fewer errors.

And yet there are reasons to believe otherwise. About 98,000 people die every year from medical errors. Some of those mistakes are made by doctors whose judgment has been scrambled by lack of sleep. But fixating on work hours has meant overlooking other issues, like lack of supervision or the failure to use more reliable computerized records. Worse still, the reforms may have created new, unexpected sources of mistakes. Shorter shifts mean doctors have less continuity with their patients. If one doctor leaves, another must take over. Work-hour reductions lead to more handoffs of patients, and the number of these handoffs is one of the strongest risk factors for error. As a result, many hospitalized patients are at the mercy of a real-life game of telephone, where a message is passed from doctor to doctor — and frequently garbled in the process.

Ted Sectish is a no-nonsense, soft-spoken pediatrician who runs the residency program at Children's Hospital in Boston and who has overseen residents for almost 20 years. To his mind, the fundamental problem is that most training programs fail to teach how to clearly convey vital information. "Patient handoffs are a nonstandardized process and a skill that's not even taught," Sectish says with dismay. (A 2006 survey found that 60 percent of residents received no training in proper handoff procedures.)

Here is a stark example of what Sectish is complaining about, from a recent study of handoffs at Yale-New Haven Hospital, in which all trainee handoffs at the hospital were recorded for two weeks and analyzed to better understand communication problems. This is a verbatim record of a trainee giving a report to the doctor coming on shift:

"O.K., so this young woman, she came in with L.F.T.'s" — liver-function tests — "in the thousands. But she also had, she had something else. O.K. Yeah, I guess it was just this. So they, I think they just think it's viral hepatitis. I don't know why she's still here. I guess they're just waiting for her L.F.T.'s to normalize again, and then they're going to send her home."

How was the on-shift trainee to make sense of that? Later that evening, the woman's blood glucose rose to dangerous levels because the handoff omitted a key fact: the medicine to keep it under control wasn't given during the day. On average, one in four sessions studied resulted in errors.

I asked Sectish if I could observe a routine shift change at Children's Hospital, so one evening in February, I accompanied him to a small conference room near the nursing station at a general pediatrics unit. Two trainees, one going off-shift and one coming on-shift, sat next to a dry-erase board on which were listed the 12 children under the team's care. This was a light census; some nights, trainees can manage up to 40 patients. There were no supervising doctors or nurses in the room, which is typical.

Both doctors had a six-page printout of the patients' names, medications, diagnoses and overnight to-do list. "Let's go in alphabetical order," the off-shift intern suggested. The first patient was a baby that was "failing to thrive," or not gaining weight. "Mostly a social issue, nothing to do overnight," he said. "What feeds?" the on-shift intern asked. "Oh, yeah — mostly purées," came the answer. The specific "social issues" weren't described.

Next on the list was a toddler with meningitis. "His potassium is low," the off-shift intern said. But his report was interrupted by a knock on the door; a consulting specialist needed to discuss a child's kidney stone. When the handoff resumed, the on-shift intern asked, "What did the kid with meningitis get for sedation in the CT scanner again?" They didn't return to the potassium problem. Twenty minutes after they started, the handoff was over.

Sectish wasn't surprised by what we saw, and he had many criticisms: patients were discussed in alphabetical order instead of severity of illness. The interns were repeatedly interrupted. Descriptions of the patients' illnesses were incomplete. The chain of responsibility was sometimes left unclear. There was no decision, for example, about who would deal with the toddler's low potassium level.

Sectish has been working to create better handoff procedures at Children's Hospital. In a three-month pilot project, young doctors were given team training, used computerized patient summaries and a structured verbal handoff (for example, always beginning with the sickest child, then a quick summary of the illness). Impressively, medical errors fell almost 40 percent and the amount of time doctors spent with patients increased. Residents throughout the hospital adopted the system this summer (the interns I saw had not yet been trained). Of course, the study was small and closely monitored, and so it's still unclear whether better handoffs will really mitigate the side effects of cutting trainees' work hours in the real world.

In 2000, the British psychologist James Reason wrote that medical systems are stacked like slices of Swiss cheese; there are holes in each system, but they don't usually overlap. An exhausted intern writes the wrong dose of a drug, but an alert pharmacist or nurse catches the mistake. Every now and then, however, all the holes align, leading to a patient's death or injury.
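Reason's metaphor has a simple probabilistic reading: if each safety layer independently misses an error at some rate, the chance that every layer fails at once is the product of those rates. A toy sketch, with entirely made-up failure rates chosen only for illustration:

```python
# Hypothetical miss rates for each "slice of cheese" guarding one drug order
layer_miss_rate = {
    "intern writes wrong dose": 0.02,
    "pharmacist misses it": 0.05,
    "nurse misses it": 0.10,
}

# The holes "align" only when every layer fails on the same order
p_holes_align = 1.0
for rate in layer_miss_rate.values():
    p_holes_align *= rate

print(f"{p_holes_align:.4%} of orders")  # roughly 1 in 10,000 under these assumptions
```

The multiplication also shows why removing one layer, or degrading it with fatigue, raises the aligned-failure rate far more than intuition suggests.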

On a national scale, it seems safe to conclude that the efforts to cut doctors' work hours failed because the change was made in isolation. A rested doctor plugs a hole in only one slice of cheese. Holes in other layers — the frequency of patient handoffs, the continued use of antiquated pen-and-paper medical charts — remain.

In fact, Libby Zion's doctors may not have saved her even if they had been fully rested. Bertrand Bell, who ran the state commission formed to investigate her death, blames lack of supervision, not fatigue. "Supervision, not regulation of hours, is the key to improving the quality of patient care," said Bell, who is now 81 and is still an active clinician at Albert Einstein College of Medicine.

There were other holes in Zion's care too. Edward Boyer, the director of medical toxicology at the University of Massachusetts Medical School (where I am chief of pediatric cardiology) argued convincingly in The New England Journal of Medicine that Zion died from a medication-related reaction called "serotonin syndrome." This, he said, was a result of a disastrous interaction between the antidepressant phenelzine, which she had been on for weeks, and the Demerol she was given at the hospital. Though the syndrome was first described more than 50 years ago, surveys show that 85 percent of physicians have never heard of it and fewer still know the drugs that cause it. More than 7,000 people develop serotonin syndrome each year — it can occur as a side effect of common drugs like Prozac, Zoloft and over-the-counter cold medicines, and the risk jumps when offending drugs are given in combination, as they were to Zion.

I walked to our outpatient clinic to see what might happen if I prescribed both phenelzine and Demerol to a made-up patient using the clinic's electronic medical-record system. Immediately, a large box appeared with the message, "This combination of drugs is associated with a potentially fatal adverse reaction." Then I went over to the inpatient wards, where, as in roughly two-thirds of American hospitals, there is no computerized prescribing system. Nothing would prevent me from writing the orders for Demerol on the paper chart. Were Libby Zion admitted to a typical hospital today, no matter how rested her doctor was, the same error that killed her could happen again.

Zion's death was never just a cautionary tale about the dangers of sleep deprivation in doctors. It was a nuanced saga about many layers of flawed health care practices, of which work hours were only the most apparent. What the persistence of errors tells us is not that we should repeal the work restrictions and return to a state of sleep deprivation, which is what most of the nation's residency directors would have us do. It instead suggests that we should redouble efforts to fix the remaining problems in medical training. In fact, Landrigan's seminal study succeeded precisely because he didn't merely shorten trainee work hours. A careful reading of his report reveals that he also created a clearer and more systematic handoff procedure and enhanced supervision by senior physicians in the intensive-care unit.

But all of these hospital reforms ignore what may be the biggest problem in physician training today: the yawning chasm between what most doctors learn during the 80 hours a week they spend training in hospitals and what they actually do after leaving their residencies. Defenders of the old-school way argue that the demands of medical practice justify the brutal hours. But after their residencies, most doctors practice in outpatient settings and work regular daytime hours as members of large groups. They treat chronic problems that need weeks or months of periodic outpatient follow-up, not high-intensity hospital-based care lasting only a few days.

"For people who came out of the old training system, it may be hard to imagine one that works better," says Donald Berwick, the director of the Centers for Medicare and Medicaid Services and former president of the Institute for Healthcare Improvement. "The point is, it's all about design and coming up with optimizing models."

Some researchers are trying small-scale innovative designs. Johns Hopkins Medical School, for example, hired professional "hospitalists" to work full time in the inpatient wards. This freed up trainees to concentrate on a smaller number of patients. Though they work fewer hours, trainees now spend more time with patients, make house calls after people are discharged and learn outpatient care for chronic problems. David Hellmann, who created the program, says the model cut heart failure readmissions by two-thirds, which offset the costs of the additional staff members.

Think about Libby Zion's medical care. Two months before her fateful admission, a doctor prescribed phenelzine. A month later, her dentist gave her narcotics after a tooth extraction. A few days later, yet another doctor prescribed an antibiotic and antihistamine for a possible ear infection. According to the grand jury report, she was also at some point prescribed two other antibiotics, the sedative Valium, a sleeping pill and yet another antidepressant — almost all of which can further worsen serotonin syndrome when used with phenelzine. For days at her apartment, she suffered from chills, body aches and joint pains that weren't investigated or treated by any of her doctors, who never talked to one another. Before she was hospitalized, she was victimized by an absurdly fragmented outpatient system.

Imagine if Zion's doctors had been better trained to treat her chronic depression, made regular follow-up phone calls to their patient, kept better records and coordinated her drugs to prevent serotonin syndrome. Perhaps they could have avoided her sudden deterioration in the first place, and no hospital trainee, sleep deprived or wide awake, would ever have seen her.

Darshak Sanghavi is the chief of pediatric cardiology at UMass Medical School and Slate's health care columnist.

Thursday, August 4, 2011

Amazon: The Mystique of Surgery and the Surgeons Who Perform Them - Dr. David Gelber

Behind the Mask: The Mystique of Surgery and the Surgeons Who Perform Them takes the reader on a journey through the mind of a surgeon, presenting a viewpoint that touches on the triumphs and failures surgeons face every day. The book presents a series of essays that progress from preoperative considerations to the intraoperative experience and finally to the postoperative perspective. I have tried to make it accessible to the nonmedical reader as well as the physician or nurse. There is a glossary of terms that I hope will be helpful, and I have tried to explain the medical terms and conditions as plainly as possible.

Surgery, presented from the surgeon's perspective, is something most people, including many nonsurgical physicians, never get to see or read about. The unexpected twists and turns that a patient can take after even the simplest procedure present a challenge that mandates constant vigilance by the surgeon and his staff. At the same time, sometimes a surgeon knows, by having just rooted around inside a patient, that everything will be okay. Training, experience, insight, knowledge, and more than a touch of humility combine to create a surgeon.

Each chapter examines a particular aspect of surgery; they are loosely arranged with an introductory section, a section dealing with preoperative considerations, followed by the intraoperative experience and, finally, postoperative considerations. The final section takes a look at the lighter side of surgery and medicine. All the anecdotes and patient scenarios presented are true to my best recollection, but all the names have been changed. I hope these pages will help remove some of the mystery that surrounds the practice of surgery. I hope the reader comes away with new understanding of what it means to be a surgeon and, perhaps, to be a patient.

Tuesday, August 2, 2011

Who Succumbs to Addiction, and Who Is Left Unscathed?

Shortly after the singer Amy Winehouse, 27, was found dead in her London home, the airwaves were ringing with her popular hit "Rehab," a song about her refusal to be treated for drug addiction.

The man said "Why you think you here?"

I said, "I got no idea."

I'm gonna, gonna lose my baby,

So I always keep a bottle near.

The official cause of Ms. Winehouse's death won't be announced until October pending toxicology reports, but her highly publicized battle with alcohol and drug addiction seems to have played a significant role. Indeed, her mother echoed a sentiment heard everywhere when she told The Sunday Mirror that her daughter's death was "only a matter of time."

But was it? Why is it that some people survive drug and alcohol abuse, even manage their lives with it, while others succumb to addiction? It's a question scientists have been wrestling with for decades, but only recently have they begun to find answers.

Illicit drug use in the United States, as in Britain, is very common and usually begins in adolescence. According to the 2008 National Survey on Drug Use and Health, 46 percent of Americans have tried an illicit drug at some point in their lives. But only 8 percent have used an illicit drug in the past month. By comparison, 51 percent have used alcohol in the past year.

Most people who experiment with drugs, then, do not become addicted. So who is at risk?

Clinicians have long been aware that patients with certain types of psychiatric illnesses — including mood, anxiety and personality disorders — are more likely to become addicts. According to the National Institute of Mental Health's Epidemiologic Catchment Area Study, patients with mental health problems are nearly three times as likely to have an addictive disorder as those without.

Conversely, 60 percent of people with a substance abuse disorder also suffer from another form of mental illness. Still, it's unclear whether addiction predisposes someone to mental illness, or vice versa.

Scientists do know that having a mental illness doesn't just increase the chance of intermittent drug abuse; it also significantly raises the risk of outright dependence and addiction. The conventional wisdom is that the link represents a form of "self-medication" — that is, people are using drugs long-term to medicate their own misery.

There is clinical and epidemiologic evidence to support this notion. Alcohol and drugs affect mood and behavior by activating the same brain circuits that are disrupted in major psychiatric illnesses. No surprise, then, that depressed and anxious patients in particular turn to alcohol and other sedatives. But these substances are terrible antidepressants and only worsen the underlying problem, leading to a downward spiral of depression and addiction.

Certain personality disorders also raise the odds of drug abuse and alcohol abuse. Narcissistic patients, who constantly battle feelings of inadequacy, are frequently drawn to stimulants, like cocaine, that provide a fleeting sense of power and self-confidence. People with borderline personality disorder, who struggle to control their impulses and anger, often resort to drugs and alcohol to soften their intolerable moods.

But precarious mental health is not the only risk for long-term addiction. Emerging evidence suggests that drug abuse can be a developmental brain disorder, and that people who become addicted are wired differently from those who do not.

Dr. Nora Volkow, director of the National Institute on Drug Abuse, has shown in several brain-imaging studies that people addicted to such drugs as cocaine, heroin and alcohol have fewer dopamine receptors in the brain's reward pathways than nonaddicts. Dopamine is a neurotransmitter critical to the experience of pleasure and desire, and sends a signal to the brain: Pay attention, this is important.

When Dr. Volkow compared the responses of addicts and normal controls to an infusion of a stimulant, she discovered that controls with high numbers of D2 receptors, a subtype of dopamine receptors, found it aversive, while addicts with low receptor levels found it pleasurable.

This finding and others like it suggest that drug addicts may have blunted reward systems in the brain, and that for them everyday pleasures don't come close to the powerful reward of drugs. There is some intriguing evidence that there is an increase in D2 receptors in addicts who abstain from drugs, though we don't yet know if they fully normalize with time.

But people are not brains in a jar; we are heavily influenced by our environments, too. The world in which Ms. Winehouse traveled appears to have been awash in illicit drugs and alcohol whose use was not just accepted but encouraged. Even people who aren't wired for addiction can become dependent on drugs and alcohol if they are constantly exposed to them, studies have found.

Drug use changes the brain. Primates that aren't predisposed to addiction will become compulsive users of cocaine as the number of D2 receptors declines in their brains, Dr. Volkow noted. And one way to produce such a decline, she has found, is to place the animals in stressful social situations.

A stressful environment in which there is ready access to drugs can trump a low genetic risk of addiction in these animals. The same may be true for humans, too. And that's a notion many find hard to believe: Just about anyone, regardless of baseline genetic risk, can become an addict under the right circumstances.

This also has profound implications for intervention and treatment. Long-term drug use usually begins during adolescence, a time when the brain is at its most plastic.

In those who are most vulnerable, substance abuse must be confronted early in adolescence, before it has set the stage for a lifetime of addiction.

Who can experiment uneventfully with drugs and who will be undone by them results from a complex interplay of genes, environment and psychology. And, unfortunately, just plain chance.

Monday, August 1, 2011

Screening has little impact on breast cancer deaths | Reuters

Falling breast cancer death rates have little to do with breast screening but are down to better treatment and health systems, scientists said on Friday, in a study likely to fuel a long-running row over the merits of mammograms.

Researchers analyzed data from three pairs of countries in Europe and found that although breast cancer screening programs had been introduced 10 to 15 years earlier in some areas than in others, declines in death rates were similar.

The findings suggest that "improvements in treatment and in the efficiency of healthcare systems may be more plausible explanations" for falling deaths rates from breast cancer, they wrote in a study in the British Medical Journal.

World Health Organization (WHO) data show that deaths from breast cancer are decreasing in the United States, Australia, and most Nordic and western European countries. Even so, breast screening remains a hot topic among experts, who disagree about whether nationwide mammogram programs do more harm than good.

The fear among some is that over-diagnosis -- when screening picks up tumors that would never have presented a problem -- may mean many women are undergoing unnecessary radical treatment, suffering the physical and psychological impact of a breast cancer diagnosis that would otherwise not have come up.

But sweeping changes in U.S. guidelines two years ago that scaled back recommendations on breast screening caused an uproar among patients' and doctors' groups, who said they put women at risk. That was swiftly followed by two conflicting European studies which further fueled the row.

The first, by Danish scientists, found that breast cancer screening programs of the type run by health services in Europe, the United States and other rich nations do nothing to reduce death rates from the disease, while the second, by a British team, found "substantial and significant reduction in breast cancer deaths" due to screening.

Then last month, researchers who conducted the longest-ever breast cancer screening study said it showed that regular mammograms prevent deaths from breast cancer, and that the number of lives saved increases over time.

Every year, breast cancer kills around 500,000 people globally and is diagnosed in close to 1.3 million people.

For this study, researchers from Britain, France and Norway used WHO data to compare trends in breast cancer death rates within three pairs of countries - Northern Ireland versus Republic of Ireland, the Netherlands versus Belgium and Flanders, and Sweden versus Norway.

Each pair had similar healthcare services and similar levels of risk factors for breast cancer mortality, but were different in that mammography screening was implemented about 10 to 15 years later in the second country of each pair.

The team, led by Philippe Autier of the International Prevention Research Institute in Lyon, France, said they expected that reductions in breast cancer death rates would show up earlier in countries where screening was introduced sooner, but their analysis in fact showed little difference.

The findings showed that from 1989 to 2006, deaths from breast cancer fell by 29 percent in Northern Ireland and 26 percent in the Republic of Ireland; by 25 percent in the Netherlands, 20 percent in Belgium and 25 percent in Flanders; and by 16 percent in Sweden and 24 percent in Norway.

"Trends in breast cancer mortality rates varied little between countries where women had been screened by mammography for a considerable time compared with those where women were largely unscreened," Autier's team wrote.

"This is in sharp contrast with the temporal difference of 10 to 15 years in implementation of mammography screening and suggests that screening has not played a direct part in the reductions of breast cancer mortality."