Taking seriously the “downstream” risks of research

This is part two of a two-part post on the ethical and practical considerations of risk in AI research. Read part one here.

Worries about “bad” downstream interpretations or applications of research are not new. It has always been possible for someone to use research findings for bad ends, intentionally or not. And we have historically accepted that risk, in the name of academic freedom—scientists should be able to ask the questions they want to ask—and on the assumption that peer review and other forms of scientific review will weed out useless or uninformative research and bad science. (A notable exception, and an interesting model, is the NIH’s oversight of Dual Use Research of Concern, or DURC.) So why worry about downstream harms with AI research? Isn’t this just AI research exceptionalism?

I think AI research makes the concerns about downstream harms particularly salient, for two reasons. First, as Dr. Narayanan put it in his presentation to SACHRP in July 2021, “AI is a methodology for taking training data from the past and creating a decision rule to use for the future. It is trying to make the future look just like the past.” If you train a tool on data collected in a world characterized by structural inequity and bias—in healthcare, access to technology, employment, housing, criminal justice, and many other systems—then the resulting tool will simply reproduce those inequities when it is applied. Second is the “black box” nature of many AI algorithms: humans often cannot understand how a trained and validated algorithm arrives at the decisions or predictions that it does. This lack of explainability obscures underlying causal relationships and makes it hard to identify and correct anything that might be wrong or troubling in how the tool gets applied.

In my view, the salience of potential downstream harms in AI research—and the pointed questions it raises about how research applications might result in group harms, perpetuate bias, or be used in the service of oppressive or otherwise problematic agendas—rightly presses on the adequacy of a research oversight framework that asks us not to consider the potential negative implications of research for society or for specific populations. Maybe it is time to revisit such a framework. Stanford University has recently launched an Ethics and Society Review program specifically for its AI research (modeled on the Microsoft Research Ethics Review Program), which requires researchers to consider and address the possible negative ethical and social implications of their AI research as a condition of funding. Could such an approach be generalized to all research?

After all, as I’ve written and talked about elsewhere, the last few years have raised our collective awareness of the ways in which the research enterprise sits within a society characterized by structural injustice, and so of course reflects and inherits that injustice in all kinds of ways, from who gets to shape the research agenda, to what research questions get funded; from who is equipped to manage the burdens of research participation, to who is positioned to actually enjoy the benefits of research. What if it were a required, common element of research design, or of research review, to identify and address whether and how the research project might perpetuate or exacerbate social inequities or, instead, be a tool for addressing them? What if our research ethics framework expected and incentivized researchers to reasonably anticipate and articulate the downstream social implications of their work? Indeed, a recent Scientific American article, responding to the fact that the shooter who massacred 10 Black people in a Buffalo grocery store in May appealed to recent genetics research to “justify” his racism, makes a strong case that “scientists need to consider their moral responsibilities as producers of this research.”

The research enterprise—in particular the large portion of it funded by taxpayers—rests on a social contract: research avails itself of finite public resources—funding, labor, bodies, good will, and support—with the expectation that the work produced will be in the service of the public good. It seems to me that academic freedom of the sort we grant scientists as part of this contract is compatible with asking researchers to think carefully about, and take some responsibility for, the downstream uses or other consequences of their research. I don’t pretend to know how we get there—do we aim to change the regulations and eliminate the section prohibiting IRBs from considering downstream risks? Do we want institutional, or centralized, bodies to conduct ethical and social reviews of AI research, along the lines of the Stanford model? What do you think?

Elisa A. Hurley, PhD, is the Executive Director of PRIM&R.