In the not-so-distant past, IRBs reviewing artificial intelligence and machine learning (AI/ML) protocols were quick to issue not-human-subjects-research determinations because the application was presented as a software development project. More recently, many IRBs have improperly issued exempt determinations because the application was presented as a secondary-use data project.
However, with an influx of healthcare-related AI/ML research submissions, we are quickly learning more about the technology and its accompanying problems, such as identifiability and algorithmic bias, to name just a few. With this awareness, we now understand that artificial intelligence and machine learning in healthcare can have significant and harmful consequences for the health and safety of humans and society. Moreover, we have come to understand, per the FDA (2019), that “AI/ML-based software, when intended to treat, diagnose, cure, mitigate, or prevent disease or other conditions, are medical devices under the FD&C Act” and classified as “Software as a Medical Device” (SaMD), and therefore FDA-regulated. When it comes to protecting humans in research, we can no longer avoid asking the glaring question: do additional oversight requirements apply to AI/ML projects?
Every year, the FDA attends the PRIM&R annual conference, and I was thrilled to see them in the lineup of 2021 Advancing Ethical Research Conference sessions focusing on big data, privacy, and novel technology. The session “When Is an Investigational Device Exemption (IDE) Needed for Medical Device Clinical Investigations?” in particular drew my attention because its title asks such a simple question, one that has, for years, never had an easy answer. While the presentation was not solely focused on artificial intelligence and machine learning, if there was ever a time to get our questions answered about how the latest novel technology (including AI/ML) in human subject research applies to the FDA regulations, it was now!
The FDA has long provided clear guidance spelling out the processes and requirements for human subject research involving drugs and devices. More recently, it has published several guidances on the requirements and recommendations for AI/ML in SaMD and on whether and when an investigational device exemption is needed. However, as helpful as the published FDA guidance is, I find I can learn much more from speaking directly with representatives and getting real-time answers. Thanks to technology, the chat and Q&A functions at PRIM&R also allowed multiple users (including me) to post their burning questions and get answers.
This year, extremely knowledgeable FDA representatives (a shout out to the amazing Ouided Rouabhi!) presented sessions and attended FDA office hours. I took advantage of both. Based on what was presented, I found the guidance on determining whether an artificial intelligence/machine learning project is FDA-regulated to be consistent with my understanding. Beyond simply asking the FDA directly, there’s also a pretty simple formula to start with:
- Run through the “How to Determine if Your Product is a Medical Device” algorithm (no pun intended). In other words, check whether the 21 CFR 812 regulations apply (even if the project involves only data). This means we need to answer two questions:
- Is it a clinical investigation (21 CFR 812.3)? and
- Does it meet the FDA definition of a medical device?
- If the answer to both questions is “yes,” the Sponsor/PI/IRB will need to make a risk determination.
This means Significant Risk (SR) projects need an IDE and must work with the FDA, while Non-Significant Risk (NSR) projects can use their IRB as their FDA surrogate, and the IRB will hold them accountable for adhering to the abbreviated requirements of 21 CFR 812. The sketch below walks through this flow.
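For readers who think in code, here is a minimal sketch of that decision flow in Python. The function, enum values, and parameter names are my own illustrative assumptions, not FDA terminology or an official determination tool; the actual risk determination always rests with the Sponsor/PI/IRB and, ultimately, the FDA.

```python
# A minimal, illustrative sketch of the IDE decision flow described above.
# All names here are hypothetical; this is not an FDA tool or official logic.
from enum import Enum, auto
from typing import Optional

class Pathway(Enum):
    NOT_DEVICE_STUDY = auto()  # 21 CFR 812 does not apply
    IDE_REQUIRED = auto()      # Significant Risk: full IDE, work with the FDA
    ABBREVIATED_IDE = auto()   # Non-Significant Risk: abbreviated 812 requirements, IRB oversight
    CONSULT_FDA = auto()       # Risk unclear: ask DeviceDetermination@fda.hhs.gov

def ide_pathway(is_clinical_investigation: bool,
                meets_device_definition: bool,
                significant_risk: Optional[bool]) -> Pathway:
    """Mirror the two-question test plus the risk determination from the post."""
    # Question 1: is it a clinical investigation (21 CFR 812.3)?
    # Question 2: does it meet the FDA definition of a medical device?
    if not (is_clinical_investigation and meets_device_definition):
        return Pathway.NOT_DEVICE_STUDY
    # Both answers are "yes": the Sponsor/PI/IRB makes the risk determination.
    if significant_risk is None:
        return Pathway.CONSULT_FDA
    return Pathway.IDE_REQUIRED if significant_risk else Pathway.ABBREVIATED_IDE

# Example: an AI/ML SaMD study the IRB judges to be Non-Significant Risk.
print(ide_pathway(is_clinical_investigation=True,
                  meets_device_definition=True,
                  significant_risk=False))  # Pathway.ABBREVIATED_IDE
```

The point of the sketch is simply that the pathway hinges on two yes/no questions before any risk determination is even reached.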
Of course, if it’s still confusing, rather than guess or hope that the project isn’t regulated, just visit the Digital Health Center of Excellence website or email DeviceDetermination@fda.hhs.gov.
Tamiko T. Eto, MS, CIP, is the Division of Research Manager for Compliance, Privacy, and IRB Support Services at the Permanente Medical Group of Kaiser Permanente, Northern California. She has over 16 years of experience in human subjects research protections. Prior to joining Kaiser Permanente, Tamiko served as Acting Director of Stanford Research Institute’s (SRI) Office of Research Integrity and Chair of the SRI IRB. She leverages her experience to apply regulatory policies to health care research projects that delve into AI and novel technology.
Tamiko works closely with AI researchers and institutional/regulatory bodies in addressing ethical and regulatory challenges related to AI, and has developed tools and checklists for IRBs to use in their review of AI research. She also actively collaborates on research, in an effort to be at the forefront of developing an ethical and regulatory framework for research involving human subjects.
Tamiko has created an Artificial Intelligence Human Subjects Research IRB Reviewer Checklist and Exempt Determinations Decision Tree that can help guide IRBs in reviewing AI research in both medical and non-medical scenarios. As part of Member Appreciation Month, she’ll host a Community Conversation on May 26 where she’ll walk through the checklist and facilitate discussion for PRIM&R members. Registration opens next week!
Have more questions on when an investigational device exemption would be necessary? Join us on April 26 for our upcoming webinar, “Un-Common Rules: Navigating FDA-Regulated Research and the IRB.”