Throughout the 2021 Advancing Ethical Research Conference, the three largest concerns around the use of artificial intelligence and machine learning (AI/ML) in human subjects research, or AI human subjects research (AIHSR), were explainability, identifiability, and algorithmic bias. PRIM&R brought Sara Jordan (one of my favorite experts in the field) and her team to explain explainability. As always, they did an amazing job, and I want to analyze their recommendations for explainability in the context of IRB review. I also propose that institutions that already have a home IRB avoid outsourcing the review to ancillary committees or commercial IRBs.
Why is Explainable Machine Learning/Explainable Artificial Intelligence (XML/XAI) important?
As Sara and her team described, explainable AI refers to machine learning techniques that make it possible for human users to understand, trust, and effectively manage AI. It is therefore imperative that IRBs understand an AI/ML system's explainability in order to fulfill their responsibility of adequately assessing the risk-benefit ratio in AIHSR. It is the IRB's role, and within its purview, to factor explainability into its review processes.
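To give a concrete, purely illustrative sense of what an explainability tool does, here is a minimal sketch using scikit-learn's permutation importance to surface the features a model relies on; the dataset and model are generic stand-ins, not any particular AIHSR tool.

```python
# Purely illustrative: a minimal, model-agnostic explainability example using
# scikit-learn's permutation importance. The dataset and model are generic
# stand-ins, not any specific AIHSR tool.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance asks: how much does model performance drop when each
# feature is randomly shuffled? Larger drops flag the features the model relies
# on, giving a human reviewer a readable account of what drives predictions.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda p: p[1], reverse=True)
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

The point for reviewers is not the specific library, but that model-agnostic techniques like this can translate a model's behavior into terms a human can evaluate.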
So how do IRBs know whether an XML/XAI tool accomplishes the regulatory and ethical objectives for adequate risk-benefit assessment? I propose three solutions:
- Education. IRB members should collectively receive training on AIHSR. CITI Program has numerous options to choose from, and some are free.
- Include an AI and data expert on the review board.
- Use AIHSR checklists in addition to the standard IRB reviewer checklists, and embed an AI/ML section into the IRB protocol application (a purely illustrative sketch of such a section appears below). This will help institutions determine which studies are AIHSR and which merely use AI but are not AIHSR. Many institutions are already using AIHSR checklists to accomplish these goals.
Ed. Note: Tamiko discussed AIHSR checklists and determining exemptions with AI research in her recent Community Conversation. The recording is available free to PRIM&R members.
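To make the embedded AI/ML section more concrete, here is a purely hypothetical sketch of how a few screening questions and a simple flagging rule might be expressed in an electronic submission system; the questions and the rule are illustrative assumptions, not an established checklist.

```python
# Purely illustrative sketch (hypothetical questions and flagging rule, not an
# official checklist): a handful of screening questions an AI/ML section of a
# protocol application might ask, plus a trivial rule for flagging potential AIHSR.
SCREENING_QUESTIONS = [
    "Does the study develop, train, validate, or deploy an AI/ML model?",
    "Were the training data derived from or about human subjects?",
    "Are the data identifiable or potentially re-identifiable?",
    "Can the study team explain how the model produces its outputs?",
    "Could model errors or biases differentially affect any participant subgroup?",
]

def flag_for_aihsr_review(answers):
    """Flag a submission for closer AIHSR review if any screening answer is True."""
    return any(answers.get(question, False) for question in SCREENING_QUESTIONS)

# Example: a submission that trains a model on identifiable patient data.
answers = {SCREENING_QUESTIONS[0]: True, SCREENING_QUESTIONS[2]: True}
print(flag_for_aihsr_review(answers))  # True -> route to AIHSR-specific review
```

In practice, the questions would be tailored to the institution's own AIHSR checklist and interpreted by the IRB rather than applied mechanically.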
Why not just outsource or make an ancillary committee?
The speakers described how IRBs can incorporate explainability tools into the review process, and I completely agree. However, while outsourcing reviews may be ideal for institutions with no IRB of their own, there are multiple downsides to outsourcing AIHSR oversight when an institution already has a home IRB. Below are a few that come to mind:
- Cost: The study team may need to plan for additional funding if the review isn’t free (i.e., when it isn’t done in-house). Additional reviews for modifications or annual renewals may be required, which would add to that cost.
- Duplication of Effort: An AI Research Review Committee (AIRC) typically acts as an ancillary review alongside IRB review. However, many, if not all, of the issues reviewed would parallel the IRB's review, duplicating effort and wasting time and money.
- No binding regulatory power: If an AIRC (or any AI ancillary review) recommends changes to the protocol, the committee likely won't have any regulatory "teeth." This means that researchers will not be required, or necessarily inclined, to comply with its "suggestions." Additionally, those suggestions may or may not make their way to the IRB unless infrastructure is established to keep the two committees "talking to each other."
How can the IRB incorporate XML/XAI considerations into their review process?
Focus on the data. Explainability depends on the model but even more so on the data used to train it, so the IRB's review should be weighted more heavily toward the training data than toward the algorithm/model itself. IRBs are better suited to address data concerns than technology concerns (though the technology may require additional risk assessment by the IT department). Again, these issues can be addressed using a quality AIHSR checklist, adequate board member training, and the addition of an AI and data expert to the review board.
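As a purely illustrative example of what a data-focused check might look like in practice, the sketch below summarizes outcome prevalence, subgroup representation, and missingness in a hypothetical training dataset; the file and column names are placeholders, not part of any specific protocol.

```python
# Purely illustrative: a simple, data-focused audit a reviewer might ask a study
# team to provide. The file name and column names ("outcome", "sex", "age_group")
# are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("training_data.csv")  # hypothetical training dataset

# Outcome prevalence: a heavily imbalanced label can mean the model performs
# poorly for the minority class even when overall accuracy looks acceptable.
print(df["outcome"].value_counts(normalize=True))

# Subgroup representation: sparsely represented subgroups are a common source of
# algorithmic bias and uneven model performance.
print(df.groupby(["sex", "age_group"]).size())

# Missingness by column: systematically missing fields can encode inequities in
# how the data were collected.
print(df.isna().mean().sort_values(ascending=False).head(10))
```

Summaries like these give the IRB something it is well equipped to evaluate, even when the model itself is opaque.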
Tamiko T. Eto, MS, CIP, is the Division of Research Manager for Compliance, Privacy, and IRB Support Services at the Permanente Medical Group of Kaiser Permanente, Northern California. She has over 16 years of experience in human subjects research protections. Prior to that, Tamiko served as Acting Director of Stanford Research Institute's (SRI) Office of Research Integrity and Chair of the SRI IRB. She leverages her experience to apply regulatory policies to health care research projects that delve into AI and novel technology research.
Tamiko works closely with AI researchers and institutional/regulatory bodies in addressing ethical and regulatory challenges related to AI, and has developed tools and checklists for IRBs to use in their review of AI research. She also actively collaborates on research, in an effort to be at the forefront of developing an ethical and regulatory framework for research involving human subjects.