This is part two of a two-part post on the ethical and practical considerations of risk in AI research. Read part one here.
Worries about “bad” downstream interpretations or applications of research are not new. It has always been the case that someone could use research findings for bad ends, intentionally or not. And we’ve historically seemed to allow this risk, in the name of academic freedom—scientists should be able to ask the questions they want to ask—and on the assumption that peer or other scientific review will weed out useless or uninformative research and bad science. (Though a notable exception, and an interesting model, is the NIH’s oversight of Dual Use Research of Concern, or DURC). So why worry about downstream harms with AI research? Isn’t this just AI research exceptionalism?
I think AI research makes the concerns about downstream harms particularly salient, for two reasons. First, according to Dr. Narayanan in his presentation to SACHRP in July 2021, “AI is a methodology for taking training data from the past and creating a decision rule to use for the future. It is trying to make the future look just like the past.” If you train a tool on data collected in a world characterized by structural inequities and biases—in healthcare, access to technology, employment, housing, criminal justice, and many other systems—then the resulting tool will simply reflect those inequities when it is applied. Second is the “black box” nature of many AI algorithms: humans are often not able to understand how a trained and validated algorithm arrives at the decisions or predictions that it does. This lack of explainability of the algorithm’s decision-making hides underlying causal relationships and makes it hard to identify and correct anything that might be wrong or troubling in how the tool gets applied.
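To make that first mechanism concrete, here is a minimal, hypothetical sketch—not drawn from Dr. Narayanan’s presentation—using synthetic data and scikit-learn. The scenario, variable names (`qualification`, `group`), and the size of the disparity are all invented for illustration: a model trained on past decisions that disadvantaged one group learns a decision rule that scores equally qualified members of that group lower.

```python
# Hypothetical illustration: a model trained on biased historical decisions
# learns to reproduce that bias in its "decision rule for the future."
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic "historical" decisions: two groups with identical qualifications,
# but past decision-makers approved group B applicants less often.
group = rng.integers(0, 2, n)            # 0 = group A, 1 = group B
qualification = rng.normal(0, 1, n)      # same distribution for both groups
past_approval = (qualification + rng.normal(0, 0.5, n) - 0.8 * group) > 0

X = np.column_stack([qualification, group])
model = LogisticRegression().fit(X, past_approval)

# Two equally qualified new applicants, one from each group.
new_applicants = np.column_stack([np.zeros(2), [0, 1]])
print(model.predict_proba(new_applicants)[:, 1])  # group B scores lower
```

Nothing in the synthetic data says group B applicants are less qualified; the trained model simply makes the future look like the past.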
In my view, the salience of the potential for downstream harms in AI research—and of the pointed questions it raises about how research applications might result in group harms, perpetuate bias, or be used in the service of oppressive or other problematic agendas—is rightly pressing on the adequacy of a research oversight framework that asks us not to consider the potential negative implications for society or specific populations. Maybe it is time to revisit such a framework. Stanford University recently launched an Ethics and Society Review program specifically for its AI research (modeled on the Microsoft Research Ethics Review Program), which requires researchers to consider and address the possible negative ethical and social implications of their AI research as a condition of funding. Could such an approach be generalized to all research?
After all, as I’ve written and talked about elsewhere, the last few years have raised our collective awareness of the ways in which the research enterprise sits within a society characterized by structural injustice, and so of course reflects and inherits that injustice in all kinds of ways, from who gets to shape the research agenda, to what research questions get funded; from who is equipped to manage the burdens of research participation, to who is positioned to actually enjoy the benefits of research. What if it were a required, common element of research design, or of research review, to identify and address whether and how a research project might perpetuate or exacerbate social inequities or, instead, be a tool for addressing them? What if our research ethics framework expected and incentivized researchers to reasonably anticipate and articulate the downstream social implications of their work? Indeed, a recent Scientific American article, responding to the fact that the shooter who massacred 10 Black people in a Buffalo grocery store in May appealed to recent genetics research to “justify” his racism, makes a strong case that “scientists need to consider their moral responsibilities as producers of this research.”
The research enterprise—in particular the large portion of it funded by taxpayers—rests on a social contract: research avails itself of finite public resources—funding, labor, bodies, goodwill, and support—with the expectation that the work produced will be in the service of the public good. It seems to me that academic freedom of the sort we grant scientists as part of this contract is compatible with asking researchers to think carefully about, and take some responsibility for, the downstream uses or other consequences of their research. I don’t pretend to know how we get there—do we aim to change the regulations and eliminate the provision prohibiting IRBs from considering downstream risks? Do we want institutional, or centralized, bodies to conduct ethical and social reviews of AI research, along the lines of the Stanford model? What do you think?
Elisa A. Hurley, PhD, is the Executive Director of PRIM&R.
I frequently wonder about the downstream risks of the rapid increase in the generation and sharing of research participants’ genetic sequencing data. The technology is evolving rapidly, and it is difficult to predict how it will be used in the future, but it is also easy to imagine how things could go wrong…
The prohibition on IRBs considering downstream risks has always bothered me. From an ethical point of view, shouldn’t we be considering the effects of the work we are reviewing? Someone should be, and in most cases that can only be the investigators or the IRB. Even within the current regulations, we have the right and obligation to consider those effects in evaluating “the importance of the knowledge that may reasonably be expected to result” as part of our risk-benefit calculation. In some cases, that “importance” will be negative. Many IRB members would hesitate to approve a study if the scientific merits were dubious enough that the study was likely to produce inaccurate results that amount to a negative “contribution” to generalizable knowledge. If the foreseeable result of a study is that people will be harmed (denied benefits, incarcerated, stigmatized, discriminated against), shouldn’t that also be subtracted from the “benefits”? If we approve such studies, we share the responsibility for the results.
This isn’t a simple issue, and there is clearly a potential for IRBs to do harm. A lively discussion at a PRIM&R meeting a few years ago made clear to me that some IRB staff would regard research that could support the value of gun control as harmful. The potential for conflict, without any clear basis for resolution, is great. But that doesn’t mean that we can ignore the questions.