By Lori Melichar, PhD, director, Pioneer Portfolio at Robert Wood Johnson Foundation
The system that drives institutional review boards (IRBs) was last overhauled nearly a quarter century ago. That was before the internet and smartphones, and before the human genome was sequenced. Recently, new technologies such as ResearchKit have emerged as game changers for patient engagement in research studies. Other studies that apply social network science, big data analysis, and personalized medical insight raise important new questions about how patient rights can and should be protected in this new age.
Maybe it's time to redesign the IRB.
That's exactly what my colleague Deborah Bae and I set out to do—explore what a redesigned IRB might look like. We gathered a group of people who care a lot about research—and a lot about research subjects. We applied design thinking to the IRB process to come up with new ways to help human research subjects understand their privacy rights and to protect them in the age of personal and big data.
It was a potent gathering. Professor Scott Klemmer facilitated the design session that included those with expertise in many fields: public health, ethics, psychology, economics, mHealth, quality improvement, medicine, (big breath), bioethics, privacy law, information science, qualitative data analysis, big data, social networks, and genetics. Also included were professionals with experience managing IRBs that oversee human subjects research.
Here are three ideas the group generated—tell us what you think of them:
Idea 1: Redesign the Consent Form and Process
How do we ensure human subjects truly understand the risks and opportunities associated with their participation in a research study—regardless of race, culture, or cognitive ability?
This group looked to Creative Commons licenses as a model of clarity, proposing three consent forms: one that contains all the legalese and scientific exposition; one in plain English; and a third that presents the risks in bullet points.
To make the process clearer for underrepresented populations, the group suggested identifying a community representative, such as a promotora in a Latino community, to help design the consent form and facilitate its use in ways that address community-specific risks the researcher might not anticipate. The promotora would help communicate these risks in a way that resonates with the community.
Idea 2: Empower Researchers to Protect Their Subjects
What if there were no requirement to obtain IRB approval? What would alternative systems look like?
This group developed the idea that certified researchers intending to engage in human subjects research would produce a document laying out their plans and the associated risks. They would then offer those documents, along with consent forms, for review and approval by peers holding a new certification in human subjects protection. Responsibility for ethical conduct during the study would be shared by the researchers and the peers who agreed that the plan would protect the rights and privacy of human research subjects.
To make it easier to create high-quality plans, this group proposed that researchers consult an online resource similar to Stack Overflow, the question-and-answer site that programmers use. There, researchers could pose questions such as, "How do I ensure that I won't cause harm by asking this question?" As on Stack Overflow, researchers would receive answers within a few hours from peers with experience in human subjects protection. Elements of the plans would function as modules that could be swapped in and out, with highly ranked modules surfaced for reuse. This system could be coupled with one that punishes offenders (more on this below).
Idea 3: Learn from Successes and Mistakes
What are the incentives—and disincentives—that could be put in place to ensure we create a research community that values learning from successes as well as mistakes?
This idea is based on a procedure used to ensure safety in the airline industry. Pilots who have a "bad" landing but report the mistake and propose a way to learn from it are not penalized professionally. Those who fail to report a landing mistake, however, face severe penalties if someone else reports it.
Analogously, in this design, researchers who create a protocol they believe to be safe, then observe a harm during the research and report it to a governing or oversight body, give the system an opportunity to learn how to prevent similar harm in the future. The incentive to report would be reinforced by severe penalties for researchers whose harms are instead reported by anyone else, including research staff or the subjects themselves.
What do you think? Could any of these ideas lead to human subjects protections that could ensure future generations contribute to science in a way that enhances our health and safety without jeopardizing theirs?
I hope that you'll discuss these ideas with your colleagues in the lunchroom and on social and professional networks. If you have other thoughts or concerns, cautions or ideas about how to improve the IRB system, I'd welcome them as well. Feel free to share in the comments below, or contact me on Twitter.
Lori Melichar, a labor economist, is a director at the Robert Wood Johnson Foundation where she focuses on discovering, exploring and learning from cutting-edge ideas with the potential to help create a Culture of Health. Follow @lorimelichar on Twitter.