The SUPPORT study and the future of comparative effectiveness research

by Elisa Hurley, PRIM&R Education Director, and Avery Avrakotos, PRIM&R Education and Policy Coordinator

In a letter dated June 4, 2013, the Department of Health and Human Services (DHHS) Office for Human Research Protections (OHRP) took the unprecedented step of suspending all compliance actions against the University of Alabama at Birmingham (UAB) relating to its participation in the Surfactant, Positive Pressure, and Oxygenation Randomized Trial (SUPPORT) study.

Let’s review the events leading up to this development.

The SUPPORT study, which ran from 2005 to 2009, was a multi-site, randomized clinical trial investigating, in part, optimal oxygen-saturation levels for severely premature infants. When the study was conceived in 2003, the American Academy of Pediatrics recommended that premature infants receive supplemental oxygen levels anywhere between 85% and 95%, with the exact percentage left to the discretion of the treating physician.

What remained unknown, and what the SUPPORT study was designed to determine, was exactly how much oxygen within that range to provide to extremely low birth weight infants in order to minimize retinopathy of prematurity (a long-known risk of prolonged supplemental oxygen use that can cause blindness) without increasing the likelihood of other serious outcomes, such as brain damage or death. Between 2005 and 2009, 1,316 premature neonates of 24 to 27 weeks’ gestation were enrolled at 23 study sites and randomized into one of two groups: 85-89% oxygen saturation (low range) or 91-95% oxygen saturation (high range). Both ranges fell within the standard of care provided at the participating institutions. The hypothesis being tested was whether, relative to infants managed with the high range of oxygen, the low range would increase survival without the occurrence of retinopathy. Thus, the two measured outcomes were retinopathy and mortality.

The study was approved by institutional review boards (IRBs) at each participating study site. Written informed consent was obtained from the parents of each neonate in the study. However, on March 7, 2013, after two years of investigation, OHRP sent a letter to the lead study site at UAB detailing its finding that “the conduct of this study was in violation of the regulatory requirements for informed consent, stemming from the failure to describe the reasonably foreseeable risks of blindness, neurological damage and death.” More specifically, the letter stated that the consent forms did not contain information in the “possible risks” section about risk of death or the risk of retinopathy. One month later, Public Citizen, a consumer advocacy organization, brought OHRP’s determination, and the study, to the attention of the media with an open letter to DHHS Secretary Kathleen Sebelius supporting OHRP’s finding and further condemning the SUPPORT study design itself as unethical.

That letter set off two months of heated public debate touching on a number of central issues in research ethics: the notion of clinical equipoise and ethical study design; the proper understanding of standard of care and when research presents novel risk; how to evaluate whether the balance between the benefits of research and the risks to vulnerable subjects is reasonable; and what constitutes appropriate informed consent in randomized studies of standard of care. Researchers, clinicians, and bioethicists came out on both sides of the conflict: some supported OHRP’s or Public Citizen’s analysis that the informed consent form, or the study design itself, was unethical or even inexcusable; others criticized those positions and defended the study design and the investigators who carried it out.

The implications of the SUPPORT study, the subsequent disagreement within the research community, and, ultimately, the unprecedented move by OHRP point to growing pains around a shift in the way clinical research is being conceived and conducted across healthcare settings. Traditionally, randomized clinical trials have been a tool to discover new treatments for a given ailment. But in the face of a proliferation of treatment options, healthcare consumers, practitioners, and sponsors are seeking systematic ways to learn which of the currently available treatments for a particular condition are most effective and most cost-effective. This trend, and a recent call from the Institute of Medicine (IOM) to develop learning healthcare systems that encompass much of what we currently think of as clinical practice, suggest that comparative effectiveness research (CER) is gaining traction.

CER has the potential to dramatically improve health outcomes while reducing costs. This type of research can be fraught with complications, however, as it raises questions about, or even upsets, the boundaries between research and clinical practice. This is arguably the case with the SUPPORT study: as OHRP wrote in its most recent letter, “Ultimately, the issues in this case come down to a fundamental difference between the obligations of clinicians and those of researchers.”

If the healthcare community is to continue to undertake efforts to examine existing treatments in a systematic fashion, one thing is certain—clearer guidance is needed. OHRP acknowledged as much in its June 4 letter, admitting that “there is justification for an incomplete understanding of how [the rules around disclosing risk] might apply” to studies such as SUPPORT. It goes on to “recognize OHRP’s obligation to provide clear guidance on what the rules are with regard to disclosure of risks in randomized studies whose treatments fall within the range of standard of care,” and promises greater than usual public participation in the development of such guidelines.

This latest letter will not end the disagreements around the ethics of the SUPPORT study. But if it injects new urgency and energy into the discussions around the need for safe and respectful comparative effectiveness research, then surely everybody wins.