Artificial Intelligence (AI) Is Moving Fast – What Are the Ethical Implications for Scientific Research?

(Originally published: February 2023 PRIM&R Member Newsletter)

AI is everywhere, changing nearly every aspect of our society, including medicine and scientific research. The research community is developing tools, procedures, and methods to address the opportunities and challenges of implementing AI and machine learning within scientific research.

AI has already been listed among Britannica’s “History of Technology Timeline,” sitting alongside major innovations like irrigation, sailing, gunpowder, printing, the telephone, and the internet. The scientific research community must move quickly, and is moving quickly, to consider the ramifications of deploying artificial intelligence in the research context and to ensure the spirit of the Belmont Report is honored.

“When we talk about AI research, we are mainly talking about research that seeks to develop tools that will replace human decision-making. The development of AI typically involves the collection and use of huge amounts of data to train an algorithm to make decisions or predictions within some domain,” said Elisa A. Hurley, PhD, Executive Director of PRIM&R.

The algorithm, she explains, is tested and validated based on the accuracy of decisions and predictions. “The goal,” Dr. Hurley said, “is to then apply the AI model to new data in the real world, such as allocating emergency room beds, diagnosing suicide risk based on social media posts, or managing workplace stress through remote sensing technologies, to take just three examples.”
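The pipeline Dr. Hurley describes (collect data, train a model, validate it on held-out cases by the accuracy of its decisions, then apply it to new data) can be sketched in miniature. The following is a deliberately toy illustration with invented synthetic data and a simple threshold “model”; it does not represent any tool or system discussed in this article.

```python
import random

random.seed(0)

# 1. Collect data: synthetic (feature, label) pairs standing in for the
#    huge real-world datasets used to train actual AI systems.
data = [(x, int(x > 0.5)) for x in (random.random() for _ in range(1000))]

# 2. Split into a training set and a held-out validation set.
train, validate = data[:800], data[800:]

def accuracy(threshold, rows):
    """Fraction of rows where the decision (x > threshold) matches the label."""
    return sum((x > threshold) == bool(y) for x, y in rows) / len(rows)

# 3. "Train": choose the decision threshold that maximizes training accuracy.
best = max((t / 100 for t in range(100)), key=lambda t: accuracy(t, train))

# 4. Validate: measure decision accuracy on data the model has never seen.
val_acc = accuracy(best, validate)
print(f"threshold={best:.2f}, validation accuracy={val_acc:.2%}")

# 5. Apply the validated model to new, real-world inputs.
new_cases = [0.2, 0.9]
predictions = [x > best for x in new_cases]
```

Real systems replace the threshold with models of far greater complexity, but the ethically salient steps, what data are collected, how accuracy is judged, and where the model is then deployed, are the same ones this toy loop makes explicit.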

‘Data Ethics’

Asked about the ethics of AI with respect to scientific and medical research, Jon Herington, PhD, Assistant Professor of Philosophy, and Health Humanities and Bioethics at the University of Rochester, focuses on the importance of “data ethics.”

“The foundation of accurate and fair machine learning (ML) algorithms is representative and responsible datasets,” Dr. Herington told PRIM&R. “If we want ML to contribute to a healthier and more equitable future, then it’s our duty as scientists to make sure we collect data in ways that are responsive to the diverse capacities, goals, and histories of the people within our communities.”

“Data ethics isn’t just about avoiding bias—it’s about ensuring that the communities we collect data with retain some sense of control over the process,” Dr. Herington said. “They need to see the research as legitimate and beneficial.”

“One of the best ways we can be responsible is by avoiding ‘health equity tourism,’ and [instead] viewing our research projects as long-term partnerships with our participants—that build their capacity alongside our knowledge,” said Dr. Herington, who presented a session titled, “The Ethical Imperative and Challenges of Working with Diverse Populations in Digital Research,” during a PRIM&R workshop on February 22, 2023.

‘Blurring of Lines’

“In today’s digital age, there is a blurring of lines between participating in research and going about everyday life, in that the data we generate in our daily activities could end up as research data, unbeknownst to us,” Dr. Hurley and PRIM&R’s Director of Public Policy, Sangeeta Panicker, PhD, co-wrote in an article published by the National Council of University Research Administrators.

Researchers in a variety of fields, including the biomedical, behavioral, cognitive, educational, and social sciences, have leveraged digital technologies to recruit human participants, implement interventions, analyze data, and disseminate findings, a trend that has been amplified during the current COVID-19 pandemic.

The scale at which information from emerging digital technologies can be collected and analyzed for research differs greatly from that of traditional, in-person laboratory experiments. That creates both an opportunity and crucial ethical considerations for the use of artificial intelligence in research.

Algorithms and models are continuously evolving with the personal information that people generate through their use of digital technologies. This changing landscape of how personal information is collected, analyzed, and shared raises serious questions about the suitability of the prevailing ethical framework for research with human participants.

“There is still considerable scientific and public confusion about the reality and potential of AI. But we know from experience that algorithmic-driven big-data informed decision-making has, to date, been notoriously fraught with ethical issues from justice to informational harms,” said Jonathan Beever, PhD, Associate Professor of Ethics and Digital Culture at the University of Central Florida (UCF) and Director and Co-Founder of UCF Center for Ethics.

“AI is more likely poised to exacerbate rather than resolve these issues. It seems prudent, therefore, to take a strong precautionary rather than proactionary stance—in particular when related to personal medical and genetic information,” Dr. Beever said.

Much AI research falls outside the human subjects research oversight framework for three main reasons, as outlined in a PRIM&R article, “AI and the ‘downstream’ risks of research” (7/14/22). First, the data involved are often collected, owned, and used by commercial entities, which are largely unregulated. Second, research depends on the collection of massive amounts of data from social media, apps, internet browsing histories, wearable devices, and electronic health records. Although these are data from and about humans, much of the data is either deidentified or already in the public domain and, as a result, largely exempt from IRB review. Third, while there may be risks to the people whose data are included in the large datasets used to train algorithms, reidentification being the most obvious, those risks are considered low.

PRIM&R’s Role in Exploring AI

As the development and deployment of AI continues to move quickly, PRIM&R is gathering thought leaders to consider what ethical AI in research looks like.

As part of that effort, in collaboration with Drexel University, PRIM&R held a workshop in February 2023 titled, “Impact of Ubiquitous Digital Technologies and Evolving Societal Norms on Research Ethics.” The workshop was funded by the National Science Foundation (NSF). PRIM&R will continue working with Drexel University and NSF to focus attention on this important issue in the months to come.

The guiding questions from PRIM&R’s February workshop included:

  • How has the ubiquitous use of algorithms, that is, of AI, in everyday digital technologies impacted the ethical dimensions of human research?
  • What is the proper ethical framework for addressing uses of digital technologies when conducting research with human participants who have varying levels of technology literacy and differing perceptions of privacy?

Workshop speakers included:

  • Mary Gray, PhD, a Senior Principal Researcher, Microsoft Research; Faculty Associate, Harvard University’s Berkman Klein Center for Internet and Society. Dr. Gray addressed why the proliferation of commercial AI demands action from the research community.
  • Desmond Patton, PhD, MSW, PRIM&R Board Member; Brian and Randi Schwartz University Professor at the University of Pennsylvania, with joint appointments in the School of Social Policy & Practice and the Annenberg School for Communication and a secondary appointment in the Department of Psychiatry in the Perelman School of Medicine. Dr. Patton discussed the promise and challenge of using AI for gun violence prevention.
  • Barbara Barry, PhD, Collaborative Scientist, Division of Health Care Delivery Research at Mayo Clinic; Dr. Barry is a human-computer interaction (HCI) researcher who studies how interaction with AI impacts human intelligence, communication, and behavior. Dr. Barry addressed ethical issues in AI-enabled clinical decision support systems.
  • Joshua August Skorburg, PhD, Assistant Professor of Philosophy, University of Guelph; Co-Academic Director, Centre for Advancing Responsible and Ethical Artificial Intelligence (CARE-AI). Dr. Skorburg addressed whether AI is compatible with participatory research.
  • Jonathan Beever, PhD, Associate Professor of Ethics and Digital Culture, University of Central Florida (UCF); Director and Co-Founder, UCF Center for Ethics. Dr. Beever focused on the ethical risks of bad actors in digital policymaking and the potential impacts on research.
  • Jonathan Herington, PhD, Assistant Professor of Philosophy, and Health Humanities and Bioethics, University of Rochester. Dr. Herington outlined the ethical imperative and challenges of working with diverse populations in digital research.

Community Concerns About AI Ethics

By a wide margin, people have expressed concern about the ethical implications of AI in research. Nearly nine out of 10 respondents to a PRIM&R LinkedIn poll on this issue indicated they had some level of concern about “the ethical implications of the use of artificial intelligence in medicine and scientific research.”


One of the fundamental questions worthy of consideration is: What role will IRBs play for the use of AI, and how will the use of AI change human subjects research?

Building on the February 2023 workshop, PRIM&R will continue to play a leadership role in ensuring the highest ethical standards in research as AI and machine learning continue to be developed and deployed.

Become a PRIM&R member and join our supportive membership community, which provides resources and connections with colleagues from more than 1,000 institutions in more than 40 countries.