Tag Archives: artificial intelligence


When we talk about AI research, we are mainly talking about research that seeks to develop tools that will replace human decision-making. Developing AI typically involves collecting and using huge amounts of data to train an algorithm to make decisions or predictions within some domain. While there may be risks to the people whose data are included in the large data sets used to train algorithms, the most salient and serious risk of harm in AI research is to those on whom the AI is applied in the real world. Read more


The three largest ethical concerns around AI/ML in human subjects research are explainability, identifiability, and algorithmic bias. It is imperative that IRBs understand an AI/ML model's explainability in order to fulfill their responsibility to adequately assess the risk-benefit ratio in artificial intelligence human subjects research (AIHSR). Read more


In the not-so-distant past, IRBs reviewing artificial intelligence and machine learning protocols were quick to issue not-human-subjects-research determinations because the application was presented as a software development project. More recently, many IRBs improperly issue exempt determinations because the application is presented as a secondary-use data project. Read more


There is a growing trend in Social, Behavioral, and Education Research (SBER): machine learning, in which investigators often request to obtain, through direct interaction and intervention, various sets of data on human subjects, including physiological data (obtained through invasive or non-invasive means) and/or biometric data (e.g., audio/visual recordings). The research as originally conceived may or may not have been considered human subjects research, but its ultimate purpose is to teach machines to think, draw conclusions, and process information in much the same way humans do. Read more