Some links and readings posted by Gary B. Rollman, Emeritus Professor of Psychology, University of Western Ontario
Saturday, September 22, 2012
The Wilson Quarterly: Beyond the Brain
The drugs don't work: a modern medical scandal | Ben Goldacre | Business | The Guardian
But we had both been misled. In October 2010, a group of researchers was finally able to bring together all the data that had ever been collected on reboxetine, both from trials that were published and from those that had never appeared in academic papers. When all this trial data was put together, it produced a shocking picture. Seven trials had been conducted comparing reboxetine against a placebo. Only one, conducted in 254 patients, had a neat, positive result, and that one was published in an academic journal, for doctors and researchers to read. But six more trials were conducted, in almost 10 times as many patients. All of them showed that reboxetine was no better than a dummy sugar pill. None of these trials was published. I had no idea they existed.
It got worse. The trials comparing reboxetine against other drugs showed exactly the same picture: three small studies, 507 patients in total, showed that reboxetine was just as good as any other drug. They were all published. But 1,657 patients' worth of data was left unpublished, and this unpublished data showed that patients on reboxetine did worse than those on other drugs. If all this wasn't bad enough, there was also the side-effects data. The drug looked fine in the trials that appeared in the academic literature; but when we saw the unpublished studies, it turned out that patients were more likely to have side-effects, more likely to drop out of taking the drug and more likely to withdraw from the trial because of side-effects, if they were taking reboxetine rather than one of its competitors.
I did everything a doctor is supposed to do. I read all the papers, I critically appraised them, I understood them, I discussed them with the patient and we made a decision together, based on the evidence. In the published data, reboxetine was a safe and effective drug. In reality, it was no better than a sugar pill and, worse, it does more harm than good. As a doctor, I did something that, on the balance of all the evidence, harmed my patient, simply because unflattering data was left unpublished.
Nobody broke any law in that situation, reboxetine is still on the market and the system that allowed all this to happen is still in play, for all drugs, in all countries in the world. Negative data goes missing, for all treatments, in all areas of science. The regulators and professional bodies we would reasonably expect to stamp out such practices have failed us. These problems have been protected from public scrutiny because they're too complex to capture in a soundbite. This is why they've gone unfixed by politicians, at least to some extent; but it's also why it takes detail to explain. The people you should have been able to trust to fix these problems have failed you, and because you have to understand a problem properly in order to fix it, there are some things you need to know.
Drugs are tested by the people who manufacture them, in poorly designed trials, on hopelessly small numbers of weird, unrepresentative patients, and analysed using techniques that are flawed by design, in such a way that they exaggerate the benefits of treatments. Unsurprisingly, these trials tend to produce results that favour the manufacturer. When trials throw up results that companies don't like, they are perfectly entitled to hide them from doctors and patients, so we only ever see a distorted picture of any drug's true effects. Regulators see most of the trial data, but only from early on in a drug's life, and even then they don't give this data to doctors or patients, or even to other parts of government. This distorted evidence is then communicated and applied in a distorted fashion.
In their 40 years of practice after leaving medical school, doctors hear about what works ad hoc, from sales reps, colleagues and journals. But those colleagues can be in the pay of drug companies – often undisclosed – and the journals are, too. And so are the patient groups. And finally, academic papers, which everyone thinks of as objective, are often covertly planned and written by people who work directly for the companies, without disclosure. Sometimes whole academic journals are owned outright by one drug company. Aside from all this, for several of the most important and enduring problems in medicine, we have no idea what the best treatment is, because it's not in anyone's financial interest to conduct any trials at all.
Now, on to the details.
In 2010, researchers from Harvard and Toronto found all the trials looking at five major classes of drug – antidepressants, ulcer drugs and so on – then measured two key features: were they positive, and were they funded by industry? They found more than 500 trials in total: 85% of the industry-funded studies were positive, but only 50% of the government-funded trials were. In 2007, researchers looked at every published trial that set out to explore the benefits of a statin. These cholesterol-lowering drugs reduce your risk of having a heart attack and are prescribed in very large quantities. This study found 192 trials in total, either comparing one statin against another, or comparing a statin against a different kind of treatment. They found that industry-funded trials were 20 times more likely to give results favouring the test drug.
These are frightening results, but they come from individual studies. So let's consider systematic reviews into this area. In 2003, two were published. They took all the studies ever published that looked at whether industry funding is associated with pro-industry results, and both found that industry-funded trials were, overall, about four times more likely to report positive results. A further review in 2007 looked at the new studies in the intervening four years: it found 20 more pieces of work, and all but two showed that industry-sponsored trials were more likely to report flattering results.
It turns out that this pattern persists even when you move away from published academic papers and look instead at trial reports from academic conferences. James Fries and Eswar Krishnan, at the Stanford University School of Medicine in California, studied all the research abstracts presented at the 2001 American College of Rheumatology meetings which reported any kind of trial and acknowledged industry sponsorship, in order to find out what proportion had results that favoured the sponsor's drug.
In general, the results section of an academic paper is extensive: the raw numbers are given for each outcome, and for each possible causal factor, but not just as raw figures. The "ranges" are given, subgroups are explored, statistical tests conducted, and each detail is described in table form, and in shorter narrative form in the text. This lengthy process is usually spread over several pages. In Fries and Krishnan (2004), this level of detail was unnecessary. The results section is a single, simple and – I like to imagine – fairly passive-aggressive sentence:
"The results from every randomised controlled trial (45 out of 45) favoured the drug of the sponsor."
How does this happen? How do industry-sponsored trials almost always manage to get a positive result? Sometimes trials are flawed by design. You can compare your new drug with something you know to be rubbish – an existing drug at an inadequate dose, perhaps, or a placebo sugar pill that does almost nothing. You can choose your patients very carefully, so they are more likely to get better on your treatment. You can peek at the results halfway through, and stop your trial early if they look good. But after all these methodological quirks comes one very simple insult to the integrity of the data. Sometimes, drug companies conduct lots of trials, and when they see that the results are unflattering, they simply fail to publish them.
Because researchers are free to bury any result they please, patients are exposed to harm on a staggering scale throughout the whole of medicine. Doctors can have no idea about the true effects of the treatments they give. Does this drug really work best, or have I simply been deprived of half the data? No one can tell. Is this expensive drug worth the money, or has the data simply been massaged? No one can tell. Will this drug kill patients? Is there any evidence that it's dangerous? No one can tell. This is a bizarre situation to arise in medicine, a discipline in which everything is supposed to be based on evidence.
And this data is withheld from everyone in medicine, from top to bottom. Nice, for example, is the National Institute for Health and Clinical Excellence, created by the British government to conduct careful, unbiased summaries of all the evidence on new treatments. It is unable either to identify or to access data on a drug's effectiveness that's been withheld by researchers or companies: Nice has no more legal right to that data than you or I do, even though it is making decisions about effectiveness, and cost-effectiveness, on behalf of the NHS, for millions of people.
In any sensible world, when researchers are conducting trials on a new tablet for a drug company, for example, we'd expect universal contracts, making it clear that all researchers are obliged to publish their results, and that industry sponsors – which have a huge interest in positive results – must have no control over the data. But, despite everything we know about industry-funded research being systematically biased, this does not happen. In fact, the opposite is true: it is entirely normal for researchers and academics conducting industry-funded trials to sign contracts subjecting them to gagging clauses that forbid them to publish, discuss or analyse data from their trials without the permission of the funder.
This is such a secretive and shameful situation that even trying to document it in public can be a fraught business. In 2006, a paper was published in the Journal of the American Medical Association (Jama), one of the biggest medical journals in the world, describing how common it was for researchers doing industry-funded trials to have these kinds of constraints placed on their right to publish the results. The study was conducted by the Nordic Cochrane Centre and it looked at all the trials given approval to go ahead in Copenhagen and Frederiksberg. (If you're wondering why these two cities were chosen, it was simply a matter of practicality: the researchers applied elsewhere without success, and were specifically refused access to data in the UK.) These trials were overwhelmingly sponsored by the pharmaceutical industry (98%) and the rules governing the management of the results tell a story that walks the now familiar line between frightening and absurd.
For 16 of the 44 trials, the sponsoring company got to see the data as it accumulated, and in a further 16 it had the right to stop the trial at any time, for any reason. This means that a company can see if a trial is going against it, and can interfere as it progresses, distorting the results. Even if the study was allowed to finish, the data could still be suppressed: there were constraints on publication rights in 40 of the 44 trials, and in half of them the contracts specifically stated that the sponsor either owned the data outright (what about the patients, you might say?), or needed to approve the final publication, or both. None of these restrictions was mentioned in any of the published papers.
When the paper describing this situation was published in Jama, Lif, the Danish pharmaceutical industry association, responded by announcing, in the Journal of the Danish Medical Association, that it was "both shaken and enraged about the criticism, that could not be recognised". It demanded an investigation of the scientists, though it failed to say by whom or of what. Lif then wrote to the Danish Committee on Scientific Dishonesty, accusing the Cochrane researchers of scientific misconduct. We can't see the letter, but the researchers say the allegations were extremely serious – they were accused of deliberately distorting the data – but vague, and without documents or evidence to back them up.
Nonetheless, the investigation went on for a year. Peter Gøtzsche, director of the Cochrane Centre, told the British Medical Journal that only Lif's third letter, 10 months into this process, made specific allegations that could be investigated by the committee. Two months after that, the charges were dismissed. The Cochrane researchers had done nothing wrong. But before they were cleared, Lif copied the letters alleging scientific dishonesty to the hospital where four of them worked, and to the management organisation running that hospital, and sent similar letters to the Danish medical association, the ministry of health, the ministry of science and so on. Gøtzsche and his colleagues felt "intimidated and harassed" by Lif's behaviour. Lif continued to insist that the researchers were guilty of misconduct even after the investigation was completed.
Paroxetine is a commonly used antidepressant, from the class of drugs known as selective serotonin reuptake inhibitors or SSRIs. It's also a good example of how companies have exploited our long-standing permissiveness about missing trials, and found loopholes in our inadequate regulations on trial disclosure.
To understand why, we first need to go through a quirk of the licensing process. Drugs do not simply come on to the market for use in all medical conditions: for any specific use of any drug, in any specific disease, you need a separate marketing authorisation. So a drug might be licensed to treat ovarian cancer, for example, but not breast cancer. That doesn't mean the drug doesn't work in breast cancer. There might well be some evidence that it's great for treating that disease, too, but maybe the company hasn't gone to the trouble and expense of getting a formal marketing authorisation for that specific use. Doctors can still go ahead and prescribe it for breast cancer, if they want, because the drug is available for prescription, it probably works, and there are boxes of it sitting in pharmacies waiting to go out. In this situation, the doctor will be prescribing the drug legally, but "off-label".
Now, it turns out that the use of a drug in children is treated as a separate marketing authorisation from its use in adults. This makes sense in many cases, because children can respond to drugs in very different ways and so research needs to be done in children separately. But getting a licence for a specific use is an arduous business, requiring lots of paperwork and some specific studies. Often, this will be so expensive that companies will not bother to get a licence specifically to market a drug for use in children, because that market is usually much smaller.
So it is not unusual for a drug to be licensed for use in adults but then prescribed for children. Regulators have recognised that this is a problem, so recently they have started to offer incentives for companies to conduct more research and formally seek these licences.
When GlaxoSmithKline applied for a marketing authorisation in children for paroxetine, an extraordinary situation came to light, triggering the longest investigation in the history of UK drugs regulation. Between 1994 and 2002, GSK conducted nine trials of paroxetine in children. The first two failed to show any benefit, but the company made no attempt to inform anyone of this by changing the "drug label" that is sent to all doctors and patients. In fact, after these trials were completed, an internal company management document stated: "It would be commercially unacceptable to include a statement that efficacy had not been demonstrated, as this would undermine the profile of paroxetine." In the year after this secret internal memo, 32,000 prescriptions were issued to children for paroxetine in the UK alone: so, while the company knew the drug didn't work in children, it was in no hurry to tell doctors that, despite knowing that large numbers of children were taking it. More trials were conducted over the coming years – nine in total – and none showed that the drug was effective at treating depression in children.
It gets much worse than that. These children weren't simply receiving a drug that the company knew to be ineffective for them; they were also being exposed to side-effects. This should be self-evident, since any effective treatment will have some side-effects, and doctors factor this in, alongside the benefits (which in this case were nonexistent). But nobody knew how bad these side-effects were, because the company didn't tell doctors, or patients, or even the regulator about the worrying safety data from its trials. This was because of a loophole: you have to tell the regulator only about side-effects reported in studies looking at the specific uses for which the drug has a marketing authorisation. Because the use of paroxetine in children was "off-label", GSK had no legal obligation to tell anyone about what it had found.
People had worried for a long time that paroxetine might increase the risk of suicide, though that is quite a difficult side-effect to detect in an antidepressant. In February 2003, GSK spontaneously sent the MHRA a package of information on the risk of suicide on paroxetine, containing some analyses done in 2002 from adverse-event data in trials the company had held, going back a decade. This analysis showed that there was no increased risk of suicide. But it was misleading: although it was unclear at the time, data from trials in children had been mixed in with data from trials in adults, which had vastly greater numbers of participants. As a result, any sign of increased suicide risk among children on paroxetine had been completely diluted away.
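To see how that kind of dilution works, here is a minimal Python sketch with invented numbers (not GSK's actual trial data): a genuine doubling of risk in a small paediatric subgroup all but disappears once its adverse events are pooled with much larger adult trials.

```python
# Illustrative sketch (invented numbers, not GSK's actual trial data):
# a real excess risk in a small paediatric subgroup can vanish when its
# adverse-event counts are pooled with much larger adult trials.

def risk_ratio(events_drug, n_drug, events_placebo, n_placebo):
    """Crude risk ratio: event rate on drug divided by rate on placebo."""
    return (events_drug / n_drug) / (events_placebo / n_placebo)

# Hypothetical paediatric trials: a doubled rate of suicidal behaviour.
kids_drug, kids_placebo = (10, 1_000), (5, 1_000)
# Hypothetical adult trials: ten times as many participants, no excess risk.
adults_drug, adults_placebo = (40, 10_000), (40, 10_000)

print(risk_ratio(*kids_drug, *kids_placebo))        # 2.0   (clear signal in children)

pooled_drug = (kids_drug[0] + adults_drug[0], kids_drug[1] + adults_drug[1])
pooled_placebo = (kids_placebo[0] + adults_placebo[0], kids_placebo[1] + adults_placebo[1])
print(risk_ratio(*pooled_drug, *pooled_placebo))    # ~1.11 (signal diluted away)
```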
Later in 2003, GSK had a meeting with the MHRA to discuss another issue involving paroxetine. At the end of this meeting, the GSK representatives gave out a briefing document, explaining that the company was planning to apply later that year for a specific marketing authorisation to use paroxetine in children. They mentioned, while handing out the document, that the MHRA might wish to bear in mind a safety concern the company had noted: an increased risk of suicide among children with depression who received paroxetine, compared with those on dummy placebo pills.
This was vitally important side-effect data, being presented, after an astonishing delay, casually, through an entirely inappropriate and unofficial channel. Although the data was given to completely the wrong team, the MHRA staff present at this meeting had the wit to spot that this was an important new problem. A flurry of activity followed: analyses were done, and within one month a letter was sent to all doctors advising them not to prescribe paroxetine to patients under the age of 18.
How is it possible that our systems for getting data from companies are so poor, they can simply withhold vitally important information showing that a drug is not only ineffective, but actively dangerous? Because the regulations contain ridiculous loopholes, and it's dismal to see how GSK cheerfully exploited them: when the investigation was published in 2008, it concluded that what the company had done – withholding important data about safety and effectiveness that doctors and patients clearly needed to see – was plainly unethical, and put children around the world at risk; but our laws are so weak that GSK could not be charged with any crime.
After this episode, the MHRA and EU changed some of their regulations, though not adequately. They created an obligation for companies to hand over safety data for uses of a drug outside its marketing authorisation; but ridiculously, for example, trials conducted outside the EU were still exempt. Some of the trials GSK conducted were published in part, but that is obviously not enough: we already know that if we see only a biased sample of the data, we are misled. But we also need all the data for the more simple reason that we need lots of data: safety signals are often weak, subtle and difficult to detect. In the case of paroxetine, the dangers became apparent only when the adverse events from all of the trials were pooled and analysed together.
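To illustrate why pooling matters for weak signals, here is a small sketch with hypothetical counts: the same modest increase in risk cannot be distinguished from chance in a single small trial, but becomes clear once several such trials are combined.

```python
# Minimal sketch (hypothetical counts): a 1.5x relative risk is statistically
# indistinguishable from "no effect" in one small trial, but clearly detectable
# once ten such trials are pooled.
import math

def rr_with_ci(a, n1, b, n2):
    """Risk ratio and 95% CI using the usual log-risk-ratio standard error."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

print(rr_with_ci(6, 500, 4, 500))       # one trial:   RR 1.5, CI ~0.43-5.3 (includes 1)
print(rr_with_ci(60, 5000, 40, 5000))   # ten pooled:  RR 1.5, CI ~1.0-2.2  (excludes 1)
```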
That leads us to the second obvious flaw in the current system: the results of these trials are given in secret to the regulator, which then sits and quietly makes a decision. This is the opposite of science, which is reliable only because everyone shows their working, explains how they know that something is effective or safe, shares their methods and results, and allows others to decide if they agree with the way in which the data was processed and analysed. Yet for the safety and efficacy of drugs, we allow it to happen behind closed doors, because drug companies have decided that they want to share their trial results discreetly with the regulators. So the most important job in evidence-based medicine is carried out alone and in secret. And regulators are not infallible, as we shall see.
Rosiglitazone was first marketed in 1999. In that first year, Dr John Buse from the University of North Carolina discussed an increased risk of heart problems at a pair of academic meetings. The drug's manufacturer, GSK, made direct contact in an attempt to silence him, then moved on to his head of department. Buse felt pressured to sign various legal documents. To cut a long story short, after wading through documents for several months, in 2007 the US Senate committee on finance released a report describing the treatment of Buse as "intimidation".
But we are more concerned with the safety and efficacy data. In 2003 the Uppsala drug monitoring group of the World Health Organisation contacted GSK about an unusually large number of spontaneous reports associating rosiglitazone with heart problems. GSK conducted two internal meta-analyses of its own data on this, in 2005 and 2006. These showed that the risk was real, but although both GSK and the FDA had these results, neither made any public statement about them, and they were not published until 2008.
During this delay, vast numbers of patients were exposed to the drug, but doctors and patients learned about this serious problem only in 2007, when cardiologist Professor Steve Nissen and colleagues published a landmark meta-analysis. This showed a 43% increase in the risk of heart problems in patients on rosiglitazone. Since people with diabetes are already at increased risk of heart problems, and the whole point of treating diabetes is to reduce this risk, that finding was a big deal. Nissen's findings were confirmed in later work, and in 2010 the drug was either taken off the market or restricted, all around the world.
Now, my argument is not that this drug should have been banned sooner because, as perverse as it sounds, doctors do often need inferior drugs for use as a last resort. For example, a patient may develop idiosyncratic side-effects on the most effective pills and be unable to take them any longer. Once this has happened, it may be worth trying a less effective drug if it is at least better than nothing.
The concern is that these discussions happened with the data locked behind closed doors, visible only to regulators. In fact, Nissen's analysis could only be done at all because of a very unusual court judgment. In 2004, when GSK was caught out withholding data showing evidence of serious side-effects from paroxetine in children, their bad behaviour resulted in a US court case over allegations of fraud, the settlement of which, alongside a significant payout, required GSK to commit to posting clinical trial results on a public website.
Nissen and his colleagues used the rosiglitazone data, when it became available, and found worrying signs of harm, which they then published to doctors – something the regulators had never done, despite having the information years earlier. If this information had all been freely available from the start, regulators might have felt a little more anxious about their decisions but, crucially, doctors and patients could have disagreed with them and made informed choices. This is why we need wider access to all trial reports, for all medicines.
Missing data poisons the well for everybody. If proper trials are never done, if trials with negative results are withheld, then we simply cannot know the true effects of the treatments we use. Evidence in medicine is not an abstract academic preoccupation. When we are fed bad data, we make the wrong decisions, inflicting unnecessary pain and suffering, and death, on people just like us.
http://www.guardian.co.uk/business/2012/sep/21/drugs-industry-scandal-ben-goldacre
Friday, September 21, 2012
Going Gently Into That Good Night | Narratively
If you're dying and don't care to wait around for death, you can always book your own appointment. One simple way to do this would be to stop eating and drinking; another would be to stop life-sustaining medicine or devices. Assuming you can decide on your own, both of these methods are good and kosher as far as the law goes. A third approach, however, ventures into a grayer area of legal and ethical terrain—quaffing a lethal cocktail. In the business of ending your life, the means matter a lot more than the final result.
These were three things my mother, Ann Krieger, was pondering when she reached the final leg of her terminal illness last year, a month before Mother's Day. After several years of fighting colon cancer, her doctor broke the news that the cancer had spread and the treatment was no longer working. There was no more they could do.
"You've got months, not weeks," he said.
"What should I do?" she asked. "Should I end it now?"
"No," he said. "You don't want to do that."
Actually, my mother kind of did, but the doctor referred her to hospice and gave her information about palliative care, a mode of treatment that relieves the pain of patients with serious illnesses. But in my mother's case, the physical distress was less acute than the existential. Coming to terms with the fact that you're going to die is elusive. For some people, like her, an attempt to manage the logistics could make it seem more doable. She and my father had given this some thought and had very specific ideas about how they wanted their end-of-life matters handled.
Six years earlier, horrified by what was taking place with Terri Schiavo in Florida, they sat my sister and me down to give us instructions. Should it ever come down to it, my parents told us, they wanted no artificial resuscitation, experimental procedures, machines or IVs—none of that stuff. They just wanted us to make sure they would be allowed to die naturally. "The idea," my father explained to me recently, "is to be pain-free, comfortable and not go through a lot of unnecessary, costly and painful treatments which won't help anyway."
More ...
Anesthesia death rates improve over 50 years - Health - CBC News
And that improvement occurred at a time when patients who were undergoing surgery were, in general terms, sicker, and the surgeries increasingly more complex.
"You can't point to one thing," said Dr. Daniel Bainbridge, an anesthesiologist and associate professor in the department of anesthesiology and peri-operative medicine at the University of Western Ontario.The lead author of the study said a variety of factors have contributed to the improvement in surgical survival.
"I'm sure it's better drugs. It's better training of our residents. It's better operating room environments, cleaner environments. Better equipment. It's an understanding about safety and culture safety and avoiding drug errors."
The study, published in this week's issue of the journal The Lancet, was undertaken to see whether advances in the science of anesthetizing people and improved surgical safety procedures were actually translating into fewer deaths in operating rooms.
With more than 230 million major surgeries occurring annually around the world, the stakes are high.
Bainbridge and his group explored the issue by amalgamating data from 87 studies other researchers had done to try to get a global picture of what had been happening over the past few decades to rates of deaths during or immediately after surgery.
The patient pool in the combined studies represents 21.4 million instances in which people were administered general anesthetic for surgery.
Prior to 1970, 357 people per million surgeries died from receiving anesthetic, according to the study. That dropped to 34 people per million in the 1990s and 2000s. Deaths solely due to anesthesia can be caused by allergic reactions to the drugs or by errors on the part of the doctors administering them.
In terms of deaths during or in the day or two after surgery, before the 1970s, 10,603 people died out of every million surgeries.
But by the 1990s and 2000s, deaths during or within 24 to 48 hours of surgery dropped to 1,176 people per million worldwide, the research suggested.
There have been high profile steps taken in recent years to improve the safety record of surgeries, such as the adoption of the safe surgery checklist.
The checklist, which was inspired by a similar and successful safety initiative instituted by the airline industry, standardizes safety checks that operating room teams should make before, during and after every surgery.
But interestingly, this study shows that survival rates began improving before the surgical checklist was designed. "Although we're still improving, we've been improving for 20 or 30 years," Bainbridge said in an interview.
A commentary published with the study issued a note of caution. Before surgical teams become complacent, the authors suggested, they should keep in mind that the follow-up period was short — two days or less. If the studies Bainbridge and his colleagues pooled had looked at deaths a month out from surgery, for example, the picture might have looked different.
"We know from large epidemiological studies that 30-day all-cause perioperative mortality for patients undergoing a broad range of in-patient surgeries remains between one per cent and two per cent," wrote Michael Avidan and Sachin Kheterpal.
That compares to a rate of 0.12 per cent in the 1990s and 2000s in the Bainbridge study.
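For readers who want to check those two figures against each other, here is a quick back-of-envelope conversion in Python, using only the numbers reported above.

```python
# Converting the reported per-million figure into a percentage, and comparing it
# with the reported 30-day mortality range (all figures from the article above).
deaths_per_million_48h = 1176                      # 1990s-2000s, within 24-48 hours of surgery
pct_48h = deaths_per_million_48h / 1_000_000 * 100
print(f"{pct_48h:.2f}%")                           # 0.12% -- the rate quoted above

for pct_30day in (1.0, 2.0):                       # 30-day all-cause perioperative mortality
    print(f"{pct_30day / pct_48h:.1f}x")           # about 8.5x and 17x the 48-hour rate
```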
Avidan and Kheterpal — who are with, respectively, the anesthesiology departments of the medical schools of Washington University in St. Louis, Mo. and the University of Michigan, at Ann Arbor — said it could be that some deaths that are the result of surgeries are now simply happening further out from the procedure.
"There probably remains an unexpected and poorly measured pandemic of perioperative mortality, which is an unappreciated public health concern," they suggested.
But the way the Bainbridge team did this work — collating data gathered by other researchers — means they can only work with the data those studies contained.
And following surgical patients for longer periods becomes complex, Bainbridge admitted. If a patient who had surgery develops pneumonia from the hospital stay and dies six weeks later, is that a surgical death? A hospital-acquired infection death?
The farther out you go, the harder it is to tease out which deaths were due to surgery and which were due to the health status of the person who had the procedure or some mistake in post-operative care.
"Twenty-four to 48 hours [follow-up] will give you a pretty good idea whether you had a catastrophic event in the OR that would have been caused by anesthesia, surgery or a combination," Bainbridge said.
While the findings of the study are good news, the picture is more favourable in some parts of the world than it is in others.
The declines in deaths were greatest and most consistent in developed countries, Bainbridge and his team found. In fact, rates of deaths due to anesthesia or other surgical complications were two to three times higher in developing countries than in developed nations.
The paper suggested the surgical community worldwide should work to close that gap, either through training programs or donations of operating room equipment such as pulse oximeters, which monitor the oxygen levels in a patient's blood.
http://www.cbc.ca/news/health/story/2012/09/21/surgery-survival.html
Thursday, September 20, 2012
Transplant Experts Blame Allocation System for Discarding Kidneys - NYTimes.com
Last year, 4,720 people died while waiting for kidney transplants in the United States. And yet, as in each of the last five years, more than 2,600 kidneys were recovered from deceased donors and then discarded without being transplanted, government data show.
Those organs typically wound up in a research laboratory or medical waste incinerator.
In many instances, organs that seemed promising for transplant based on the age and health of the donor were discovered to have problems that made them not viable.
But many experts agree that a significant number of discarded kidneys — perhaps even half, some believe — could be transplanted if the system for allocating them better matched the right organ to the right recipient in the right amount of time.
The current process is made inefficient, they say, by an outdated computer matching program, stifling government oversight, the overreliance by doctors on inconclusive tests and even federal laws against age discrimination. The result is a system of medical rationing that arguably gives all candidates a fair shot at a transplant but that may not save as many lives as it could.
"There is no doubt that organs that can help somebody and have a survival benefit are being discarded every day," said Dr. Dorry Segev, a transplant surgeon at Johns Hopkins University School of Medicine.
For 25 years, the wait list for deceased donor kidneys — which stood at 93,413 on Wednesday — has remained stubbornly rooted in a federal policy that amounts largely to first come first served. As designed by the government's Organ Procurement and Transplantation Network, which is managed under federal contract by the nonprofit United Network for Organ Sharing, the system is considered simple and transparent. But many in the field argue that it wastes precious opportunities for transplants.
One recent computer simulation, by researchers with the Scientific Registry of Transplant Recipients, projected that a redesigned system could add 10,000 years of life from just one year of transplants.
Currently, the country is divided into 58 donation districts. When a deceased donor kidney becomes available, the transplant network's rules dictate that it is first offered to the compatible candidate within the district who has waited the longest. Additional priority is given to children, to candidates whose blood chemistry makes them particularly difficult to match and to those who are particularly well matched to the donor. If no taker is found locally, the electronic search expands to the region and eventually goes national.
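As a rough sketch of the allocation order just described: the function names, the Candidate fields and the priority weights below are my own simplifications, not the actual UNOS point system, which has many more factors.

```python
# Rough sketch of the allocation order described above (simplified; the real
# UNOS point system has many more factors, and the priority weights here are mine).
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    compatible: bool             # blood-type / crossmatch compatible with this donor
    days_waiting: int
    is_child: bool = False
    hard_to_match: bool = False  # blood chemistry that makes matching difficult
    well_matched: bool = False   # particularly close match to the donor

def offer_order(pool):
    """Compatible candidates, longest wait first, with priority bumps."""
    def priority(c):
        bump = 10_000 * (c.is_child + c.hard_to_match + c.well_matched)  # illustrative weight
        return c.days_waiting + bump
    return sorted((c for c in pool if c.compatible), key=priority, reverse=True)

def allocate(district, region, national, accepts):
    """Offer locally first, then escalate to the region, then nationally.
    `accepts` stands in for each transplant centre's yes/no decision."""
    for pool in (district, region, national):
        for candidate in offer_order(pool):
            if accepts(candidate):
                return candidate
    return None   # no taker anywhere; the kidney risks being discarded
```

Note what the ordering never asks: how long the kidney itself is likely to last, or how long the recipient is expected to live, which is the gap the next paragraphs describe.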
The kidney matching system does not, however, consider the projected life expectancy of the recipient or the urgency of the transplant. By contrast, the systems for allocating livers, hearts and lungs have been revised to weigh those factors.
As a result, kidneys that might function for decades can be routed to elderly patients with only a few years to live. And when older, lower-quality kidneys become available, candidates atop the list and their doctors can simply turn them down and wait for better organs. If that happens too often, doctors say, a kidney can develop a self-fulfilling reputation as an unwanted organ.
Complicating matters is a race against the clock that starts as soon as a kidney is recovered and placed on ice for evaluation. Because kidneys start to degrade during this "cold ischemic time," surgeons typically hope to transplant them within 24 to 36 hours.
But that short window can be devoured by testing, searches for a recipient and long drives or flights to transport the kidney. The organ procurement organization in each district is allowed to make offers to only a few hospitals at a time — usually three to five — and the hospitals have an hour to respond.
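A quick back-of-envelope calculation shows how those constraints eat the clock; the batch size and response time come from the article, while the number of refusals is hypothetical.

```python
# Rough clock arithmetic under the constraints described above
# (batch size and response time from the article; decline count is hypothetical).
batch_size = 5            # offers go out to roughly 3-5 hospitals at a time
hours_per_batch = 1       # each hospital has an hour to respond
declines = 45             # hypothetical number of refusals before a taker is found

batches = -(-declines // batch_size)            # ceiling division
print(batches * hours_per_batch, "hours spent just on declined offers")   # 9 hours
# Add recovery, testing and transport, and a 24-36 hour viability window shrinks fast.
```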
Missed Opportunities
It is not precisely clear how often kidneys that might be useful are discarded.
Last year 2,644 of the 14,784 kidneys recovered were discarded, or nearly 18 percent, according to the United Network for Organ Sharing. About one-fifth of those discarded kidneys — nearly 500 — were not transplanted because a recipient could not be found.
But transplant statisticians say that record-keeping is imprecise. And some authorities, like Dr. Barry M. Straube, a nephrologist who served for six years as Medicare's chief medical officer, and Dr. Robert J. Stratta, the director of transplantation at Wake Forest School of Medicine, speculate that as many as half of discarded kidneys could be transplanted.
"I think you could argue about how many missed opportunities there are," said Dr. Alan B. Leichtman, a nephrologist at the University of Michigan. "But not that there are missed opportunities."
Last October, a ticking clock apparently forced doctors to discard one of the kidneys donated by Judith Kurash, 72, who died in a Twin Cities-area hospital after suffering a brain aneurysm.
Surgeons successfully transplanted her liver. Her heart went to research. But given Ms. Kurash's age and history of hypertension, finding recipients for her kidneys proved challenging.
They were turned down by five area hospitals, six Midwestern ones and then 37 others nationwide, before finally being accepted by a center on the East Coast, according to LifeSource, the organ procurement organization in St. Paul. Although testing showed the kidneys to be similar, one was transplanted, while the other was not.
The East Coast hospital declined to be identified or comment on the case. But Meg Rogers, LifeSource's director of organ procurement, said the hospital reported that Ms. Kurash's right kidney had "timed out" after spending at least 24 hours on ice.
"Unfortunately, once that kidney is recovered, time isn't on our side," Ms. Rogers said. "It sometimes takes all the stars aligning."
Although pleased that any of his mother's organs had been placed, Terry Kurash said the successful transplant of one kidney, to a 58-year-old man, raised the question of why the second had been discarded.
"You'd like to see the most efficient process that allows the most organs to get to the most recipients," Mr. Kurash said.
More than half of discarded kidneys come from older donors like Ms. Kurash whose age and health problems may have made them marginal for transplant. But in 2011, nearly 1,000 discarded kidneys came from donors who were younger than 60, according to the organ sharing network.
So it was in March, when a nationwide computer search failed to produce a taker for one of the kidneys donated by Frank D. Duncan, a fit 36-year-old who succumbed to smoke inhalation from an early-morning electrical fire at his house in Memphis (his wife and two young sons escaped).
Mr. Duncan's widow, Catherine, said she had received notice from the Mid-South Transplant Foundation, the local organ procurement organization, that his liver had been transplanted into a 47-year-old man and that his left kidney had gone to a 36-year-old woman.
But despite making offers to nearly 10,000 potential matches, the agency did not find a candidate willing to take Mr. Duncan's right kidney, said Kim Van Frank, Mid-South's executive director.
The failure to place the second kidney, which was discarded, confounded Ms. Duncan. "You've got all these people all over the country that are waiting for one," she said, "and here you've got this perfectly good kidney."
Success at a Cost
The number of kidneys discarded each year has grown 76 percent over the last decade, more than twice as fast as the increase in kidney recoveries. Clearly, revamping the allocation system would help shorten the wait list.
But given that the list has grown 30 percent in five years, transplant officials say that more must also be done to encourage people to register as donors, remove financial and logistical obstacles and narrow extreme differences in wait list time among states.
There are any number of reasons a doctor might turn away a kidney. But there is growing concern that those decisions are made without good diagnostic tools and under pressure from regulators and insurers to maintain high transplant-success rates.
When a kidney is removed, doctors often biopsy a slice and connect the organ to a pump that measures blood flow for signs of scarring and hardening of the vessels. When kidneys are discarded, hospitals cite biopsy results more than any other reason. Yet studies suggest that biopsies do not always do a good job of predicting how long a transplanted organ might survive.
"The hardest decision we make in deceased organ transplant is whether to accept a given organ for a given patient," said Dr. Gabriel M. Danovitch, medical director of the kidney transplant program at Ronald Reagan U.C.L.A. Medical Center. "It's all odds, based on information that is incomplete at best."
Another factor, doctors and organ procurement officials say, is federal scrutiny of transplant success rates.
In 2007, following revelations of lax government oversight of poorly performing transplant centers, the federal agency that manages Medicare required that survival data for transplanted organs and recipients be made public. The figures are adjusted for relative risk factors and compared with expected survival rates.
The penalty for underperformance can be severe. If the number of failures exceeds expected levels by 50 percent, transplant programs are flagged, explained Thomas E. Hamilton, director of survey and certification for the federal Centers for Medicare and Medicaid Services. If it happens twice in 30 months, the program's administrators are given a brief probationary period to improve, or convince regulators that there were other factors. Otherwise, the program is decertified.
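A minimal sketch of that flagging threshold, taking the "exceeds expected levels by 50 percent" rule at face value; the counts below are hypothetical, and the real CMS review also applies statistical tests and other criteria.

```python
# Minimal sketch of the flagging threshold as described above (counts are
# hypothetical; the actual CMS criteria involve additional statistical tests).
def flagged(observed_failures, expected_failures):
    return observed_failures > 1.5 * expected_failures

print(flagged(observed_failures=13, expected_failures=8))   # True  (13 > 12)
print(flagged(observed_failures=11, expected_failures=8))   # False (11 <= 12)
```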
Because Medicare is the primary insurer for kidney transplants, such a ruling can effectively close a transplant program. Commercial insurers also use the survival ratings to make decisions on contracts.
Over five years, through June, 79 organ transplant programs had drawn oversight for repeatedly falling short and seven had been decertified, Mr. Hamilton said.
In interviews, dozens of transplant specialists said the threat of government penalties had made doctors far more selective about the organs and patients they accepted, leading to more discards.
"When you're looking at organs on the margins, if you've had a couple of bad outcomes recently you say, 'Well, why should I do this?' " said Dr. Lloyd E. Ratner, direct of renal and pancreatic transplantation at NewYork-Presbyterian/Columbia hospital. "You can always find a reason to turn organs down. It's this whole cascade that winds up with people being denied care or with reduced access to care."
Dr. Michael A. Rees, a transplant surgeon at the University of Toledo Medical Center, said his kidney program was cited by Medicare in 2008 after several unlikely failures. To save the program from decertification, he said he cut back to about 60 transplants a year from 100, becoming far choosier about the organs and recipients he accepted.
The one-year transplant survival rate rose to 96 percent from 88 percent, but Dr. Rees still bristles at the trade-off. "Which serves America better?" he asked. "A program doing 100 kidneys and 88 percent of them are working, or a program that does 60 kidneys and 59 of them are working? It's rationing health care under the guise of quality, and it's a tragedy that we are throwing away perfectly good organs."
Mr. Hamilton said the Medicare agency agreed that individual hospitals had grown more cautious, and appropriately so. But he said there was no evidence that had led to more discards nationally, as other hospitals had picked up the slack.
"There's something very negative about poor outcomes," Mr. Hamilton said. "And that's where we need to be putting our attention."
Other Approaches
The transplant community has grappled for years with the problem of viable kidneys being discarded. But the politics of rationing, where any reallocation creates high-stakes winners and losers, has thwarted all efforts at revision. Eight years after the United Network for Organ Sharing charged its kidney transplantation committee with improving the system, there has been no change.
One approach, outlined by the committee in February 2011, called for rating each organ based on the donor's age, height, weight and medical history. The top 20 percent of those kidneys would be allocated to candidates expected to live the longest. The rest would be matched to give priority to candidates within 15 years of the donor's age.
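A rough sketch of that proposal as summarised here; the scoring scale and the ordering inside the 15-year age band are my own placeholders, since the article does not spell them out.

```python
# Rough sketch of the February 2011 proposal as summarised above (my own
# simplification: the article does not say how candidates are ranked inside
# the 15-year age band, so wait time is used as a placeholder).
def route_kidney(kidney_quality_percentile, donor_age, candidates):
    # candidates: list of dicts with "age", "days_waiting", "expected_survival_years"
    if kidney_quality_percentile >= 80:    # top 20% of kidneys by donor rating
        return max(candidates, key=lambda c: c["expected_survival_years"])
    near_age = [c for c in candidates if abs(c["age"] - donor_age) <= 15]
    pool = near_age or candidates          # fall back if nobody is within 15 years
    return max(pool, key=lambda c: c["days_waiting"])
```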
The proposal died quickly after federal officials warned that discrimination laws would prohibit the use of age to determine outright who gets a transplant.
There are no such obstacles in Europe. And in 1999, seven countries, including Germany, began matching kidneys from donors 65 and older to recipients in the same age bracket. Those kidneys were allocated close to home to shorten cold time, and biopsies were used sparingly.
The number of older kidney donors has more than tripled, and discard rates are less than a third of those in the United States, said Dr. Ulrich Frei, a German nephrologist who has compared the two systems. Studies have found no significant difference in survival rates for older patients in Europe and the United States, he said.
Dr. Frei said he found the discard rate in the United States "quite disturbing." The reliance on biopsy is misplaced, he said, and valuable hours are wasted in the sequential search for a taker for a lower-quality kidney. That they wind up discarded, he said, is "a self-fulfilling prophecy."
On Friday, the kidney committee plans to circulate a new proposal that would leave most of the system in place. As with the prior plan, the top 20 percent of kidneys would be matched to the candidates expected to survive the longest, placing older patients at a disadvantage. But the remaining 80 percent would still be allocated primarily by time spent on the wait list.
"It's a compromise," Dr. Ratner said. "I think it's going to make very little difference."
Dr. John J. Friedewald, the committee's chairman, said it was impossible to please everybody when allocating limited resources.
"We want to maintain equal access and do better with this pool of kidneys," he said. "But by changing allocation slightly and getting 10,000 more life-years lived, what is that worth? Is it worth slightly decreased rates of access for certain groups of people? That's what we go back and forth trying to decide."
In August, as the committee finalized its recommendation, a group of researchers proposed yet another allocation algorithm in the American Journal of Transplantation. It would give individuals in different age bands an equal chance to get a transplant in a given year. But it would drive the best kidneys to the youngest recipients.
A lengthy public comment period will follow Friday's release of the new proposal. The organ sharing network's board is not expected to vote on a plan until at least June, and possibly much later.