The more we know, the more we know how little we know…

by Andrea Johnson, JD, CIP, regulatory specialist in the Research Integrity Office at Oregon Health & Science University

John P. A. Ioannidis, MD, PhD, the 2012 Advancing Ethical Research Conference keynote speaker on Tuesday, raised provocative questions about the information we obtain through scientific research. Among his many achievements, Dr. Ioannidis is well known for a 2005 paper published in PLOS Medicine titled “Why Most Published Research Findings Are False.” In his presentation today, he elaborated on that proposition, describing compelling evidence that much of the scientific literature we have come to know and rely upon is permeated by numerous sources of bias.

At first, I reacted with alarm. If everything is biased, or even “false,” how can we trust that we know anything? Is this research, for the sake of which we expose so many human subjects to varying degrees of risk, really worth it?

Aside from being intriguing philosophical considerations, these questions carry practical implications for institutional review board (IRB) review of research. An IRB that assesses the risks and benefits of a study must consider the likely value of the knowledge to be gained. The IRBs that I have worked with have appraised this likely value by considering the quality of a study’s scientific design, looking out for obvious signs of bias or lack of control of variables. However, IRB members have limited expertise in the research topics under review, and the board often relies on the scientific background and justification provided by the investigator. Dr. Ioannidis made me wonder whether this system could ever be enough, not only because the investigator is likely to view the literature with his or her own bias, but also because the literature itself cannot be trusted.

But let’s do a reality check. Stepping back and looking at the big picture, it seems pretty indisputable that research has moved science and medicine forward. Furthermore, it is not practical for an IRB to conduct its own independent literature analysis for every project it reviews. Nor is that practice expected by our federal regulators, as noted in OHRP’s Guidance on IRB Continuing Review of Research (November 2010), which states, “[N]ote that OHRP does not expect the IRB to perform an independent review of the relevant scientific literature related to a particular research project undergoing continuing review; this responsibility rests with the investigators and any monitoring entity for the research.”

As with many challenges in the protection of human research subjects, there is a balance to be struck here. To OHRP’s point, an IRB can review a study’s monitoring plan to ensure that it promotes more objective consideration of relevant literature. Additionally, Dr. Ioannidis advocated that both positive and negative research results should be published and accessible to the public.

These are both good ways to help mitigate the problem, but the session still left me feeling a bit skeptical and unsettled. I suppose that could be Dr. Ioannidis’s goal: to cultivate a healthy dose of skepticism and discomfort that will keep us on our toes and continuously searching for ways to improve the review process.