Earlier this spring, the Maine Regulatory Training and Ethics Center (MeRTEC) launched the COVID-19 Learning IRB, a free, nonprofit “learning” IRB to review COVID-19 related research. To learn more about this initiative, we interviewed Stephen Rosenfeld, MD, a partner in the initiative and a PRIM&R board member. The interview has been edited for length and clarity.
You can learn more about its mission and services or ask questions of organization representatives on the MeRTEC COVID-19 Learning IRB website.
Tim Badmington (PRIM&R)
What is a learning IRB? What specifically do you hope to learn?
How do we know an IRB makes good decisions? We know because its members have appropriate credentials and satisfy the regulatory requirements. That's essentially all we use. We assume that if we appoint the right, well-credentialed people and follow compliant processes, the decisions will be good. Implicitly, every decision is a matter of judgment, and none is “wrong,” even when reviewers disagree. Within the current system, that’s as far as we can go.
So are we making good decisions? Maybe. But we have no data on that. We're supposed to be part of a scientific enterprise, but we're very happy operating with no data at all. And we never assess our decisions or our membership against the requirement to “promote respect for [its] advice and counsel” (45 CFR § 690.107), at least among the people for whom they're supposed to matter. The purpose, after all, is to protect human subjects; they're the ones who should decide whether IRB decisions are worthy of respect or not. That’s strongly implied by the regulations when they talk about issues such as community attitudes, cultural backgrounds, and sensitivities.
So that's the background. We know there is variation between IRB decisions, less in approvals and disapprovals than in the conditions placed on research, and we have no idea how widespread or justified it is. IRBs have routinely been criticized by the research community for being arbitrary; variability may be appropriate and fact-based, but unless you analyze and explain it, you can't distinguish it from arbitrariness. So that's the real problem. How do we move beyond that?
The only way to know whether you're making good decisions, and the only way to make better decisions, is to learn from the decisions you've made. Almost by design, our system is set up so that we never learn anything formally. To be sure, there's a lot of tacit learning that goes on in IRBs, but it remains within the personal and institutional memory of that group of people.
There are two dimensions to learning or applying what we learn. The first is that an IRB has to have the ability, when conducting its reviews, to quickly and easily see what it's done before in similar circumstances. Whether it's the same protocol or a similar protocol, whether it happened last year or five years ago, you need to be able to see it. And if you're making a different determination this time, you need to make that explicit and explain why.
The other dimension is learning across the IRB community. There's no reason we shouldn't all be sharing these decisions, which are in fact about societal good. And which are often about very difficult questions! The whole idea of doing research on people raises fundamental ethical questions. By its nature, you're treating people as objects. We may have found acceptable ethical solutions for particular research methodologies, but technology and research change, and the ethical challenges are always evolving. We look to the Belmont principles, which provide a good framework, but we need to figure out how to apply them in new circumstances. When a novel ethical situation comes up, it’s typically debated in the literature by bioethicists, who often take both sides and provide us with a framework for consideration. But in the meantime, IRBs are faced with making concrete determinations that require ethical interpretation, and they do this over and over again. And yet the system is set up so we never learn from these decisions!
We're not always going to agree with the decisions another IRB made, but that will serve as a starting point for discussion. And our decisions eventually have to be transparent and have to be subject to the ethical judgment of society at large. Because that’s what promotes respect. That's what promotes trust in research, which is really what the IRB system is about.
Why is that important? While the immediate impact of our decisions may be on individual study designs, the decisions are supposed to be about protecting human subjects. And the standards for that protection are not specific to an individual study or established by an individual researcher or even by the research community. Those standards should be established by broader society. We have to judge our decisions against the needs of those we are supposed to serve. That means we have to articulate decisions so that the people we serve can see them and understand them.
Tim Badmington (PRIM&R)
There is a difficulty in quantifying something that is maybe fundamentally unquantifiable, which is ethics. So how do you measure success? How do you measure the success of this project, or the success of learning IRBs as a concept?
One way that question has been asked is: how do we know we're protecting participants? Can we measure the decrease in harms? Focusing on risk of harm is a very biomedical perspective, and even with that limited perspective there are real practical problems with assessing such a reduction. And there are a lot of other things that happen, particularly these days, that impinge on rights and welfare, which are just not captured when we limit our assessment to physical harms. Ethics is not just about hurting people physically. It's about trespassing on their autonomy. It's about taking away their rights. It's about denying them opportunities. And those are not inherently quantifiable things, because they really do depend on the perspectives of the people who are affected.
So I think the answer is that, because it's hard, we haven't even really tried to connect this back to research participants or to society. Research participants are an important perspective to consider, and particularly relevant to the deliberations of the IRB, but everybody is a potential research participant and ethics apply across society. We're trying to apply societal ethics to research; we shouldn’t be creating our own enclosed ethical system. We have to connect it back to society, and I don't know how to do that—that's why it's the learning IRB! The goal has to be to connect what IRBs do back to the people that they're trying to serve and let those people be the judge. Right now, they're completely cut out, which doesn't make any sense.
Tim Badmington (PRIM&R)
You say a pretty interesting thing about a vertical and horizontal approach to addressing issues of justice in research. Can you expand on that?
I think of vertical and horizontal dimensions as applying to consideration of research programs. Seeing the vertical dimension is knowing the history of a study and how it was reviewed or run in your own organization. Seeing the horizontal dimension is seeing how that study fits in the overall research enterprise—it’s about uninformative trials and questions that have already been answered. These dimensions are important to our usual considerations of risk and benefit, but they are also important when considering justice. Justice is not simply a matter of “equitable inclusion” for a single study, but about the research program. Diversity, equity, and inclusion (DEI) are goals for the research enterprise—they are not always coherent as goals for an individual, isolated study as typically considered by an IRB.
There are other dimensions of DEI that apply in the context of research. The one that usually comes up in the IRB context is representation and scientific validity. You can't conclude a medicine is safe or an intervention is safe or a policy is appropriate if you haven't examined it in the context of the people who will be affected. Not knowing about safety or effectiveness means that the risks faced by understudied populations will be higher, or they may not even have access to the intervention if it is approved. But I think that considerations of scientific validity are only one aspect of justice.
Most research starts out as publicly funded. Even if the research is done by industry, it's often built on basic science research that was done at the NIH or at a university. So we're all paying for it. And science and public health are social projects. To disenfranchise people from participating is to meaningfully disenfranchise them from an important part of society. That can hurt us all. Lack of participation, even when groups were excluded rather than choosing not to participate, can be used to justify restricting the benefits of research. And groups that were cut out will feel that the research enterprise doesn’t serve them. I think in order to keep research healthy and alive and supported by the public, we need to give everybody the opportunity to contribute. This is beyond the need for scientific validity.
Tim Badmington (PRIM&R)
So with respect to the still-new MeRTEC COVID-19 Learning IRB project, you've been open to protocol submissions for a little while now, right? What have you received so far?
We’ve received quite a few submissions for expedited or exempt social-behavioral research. In this project, like so many other things, the pandemic has played out differently than we thought it would. We limited our scope to COVID because we thought it was responsive to the moment, and we wanted to contribute to the pandemic response by providing free reviews for people who wanted to do COVID research.
I think COVID was such a stressor to the system that people put up all sorts of local solutions. And we overestimated the practical ability or demand for anyone to do research outside their existing institutional structures. In the beginning, it seemed possible that small hospitals would want to do small-scale research on COVID, but the landscape turned out to be so complicated, and frankly, political and commercial, that I’m not sure COVID research should have been our sole focus.
That said, there is a lot of social and behavioral research to be done on the impact of COVID, using surveys and other observational methods. And those researchers typically don't have a lot of funding to support IRB review. So what we've seen so far is along those lines, but the studies are small and typically have come from within our local system. Having a better understanding of the COVID research landscape, I think MeRTEC is likely to have more of an impact on social and behavioral research projects than on biomedical research on COVID.
But that’s a lesson learned. How could you think of putting together an IRB for anything but COVID a year ago?
Tim Badmington (PRIM&R)
What are you hoping to achieve with nonprofit status?
Commercial IRBs are always an alternative for researchers who don’t have an IRB or are looking for a review outside their own institution. We are not interested in competing with those IRBs, but we want to provide an alternative to people who have concerns about structural conflict of interest. As you know, I was part of the commercial IRB community for many years, and I know that they can do good and compliant reviews. But separating “board and business” is a continual challenge. So basically, we want to demonstrate that an alternative is feasible and can do good work. And I want a place where our decisions about quality and our decisions about process are made based first on the best interests of participants.
Stephen Rosenfeld, MD, is on PRIM&R's Board of Directors. You can read his bio on our website.