by Amy Davis, JD, MPH
Overheard, a new series here on Ampersand, is a glimpse into conversations, discussion, and debate on PRIM&R’s IRB Forum. No participants or institutions are identified to preserve the mission and openness of the Forum, though IRB Forum members are welcome and encouraged to comment on these posts. To suggest a topic for Overheard, please contact us. If you are not a member of the IRB Forum and would like to join, you can do so here.
On the IRB Forum, people are talking about studies using deception. The exchange offers a glimpse into the significant ethical concerns researchers must consider in using deception in research. Whether you believe that the use of deception is irreconcilable with the principles of respect for persons and autonomy and therefore inherently unethical, or that there are circumstances in which deception is justified, the issue is surfacing more often as researchers find new ways to use the internet and crowdsourcing to conduct public health research. A recent and highly publicized example of this phenomenon is the Facebook mood experiment.
What is the harm of telling subjects “little white lies” to ensure unbiased data or to encourage participation? From an ethics perspective, lies undermine informed consent, rendering the research unethical. From a risk/benefit perspective, depending on the lie, the deception itself could cause psychological harm to the subject. (See the Stanford Prison Experiment.) Subjects who do not have full knowledge of the purpose or potential risks of the study are not fully informed. How can an IRB approve such a study?
There must be some way to reconcile deception in research with informed consent and the fundamental principle of respect for autonomy, or else it wouldn’t be so widely used. The centerpiece of research ethics standards, The Belmont Report, is, as is so often the case, a useful place to launch an inquiry of this kind. While The Belmont Report states that informed consent must meet “the reasonable volunteer” standard (“knowing that the procedure is neither necessary for their care nor perhaps fully understood, [the volunteer] can decide whether they wish to participate in the furthering of knowledge”), it allows for the possibility that there may be circumstances that justify the withholding of certain information when the information is likely to “impair the validity of the research.”
On this point, the Belmont Report is explicit: “In all cases of research involving incomplete disclosure, such research is justified only if it is clear that (1) incomplete disclosure is truly necessary to accomplish the goals of the research, (2) there are no undisclosed risks to subjects that are more than minimal, and (3) there is an adequate plan for debriefing subjects, when appropriate, and for dissemination of research results to them. Information about risks should never be withheld for the purpose of eliciting the cooperation of subjects, and truthful answers should always be given to direct questions about the research. Care should be taken to distinguish cases in which disclosure would destroy or invalidate the research from cases in which disclosure would simply inconvenience the investigator.”
The standards referenced in the IRB Forum discussion, those of the American Psychological Association’s (APA) Ethics Code, echo these criteria and state:
8.07 Deception in Research
(a) Psychologists do not conduct a study involving deception unless they have determined that the use of deceptive techniques is justified by the study’s significant prospective scientific, educational or applied value and that effective nondeceptive alternative procedures are not feasible. [Emphasis added.]
(b) Psychologists do not deceive prospective participants about research that is reasonably expected to cause physical pain or severe emotional distress.
(c) Psychologists explain any deception that is an integral feature of the design and conduct of an experiment to participants as early as is feasible, preferably at the conclusion of their participation, but no later than at the conclusion of the data collection, and permit participants to withdraw their data. [Emphasis added.]
Surprisingly, the US regulations are not so explicit and never mention “deception.” Researchers using deception rely on the exception established within the informed consent rules, which allows IRBs to approve research that “does not include, or which alters, some or all of the elements of informed consent” so long as:
“(1) The research involves no more than minimal risk to the subjects;
(2) The waiver or alteration will not adversely affect the rights and welfare of the subjects;
(3) The research could not practicably be carried out without the waiver or alteration; and
(4) Whenever appropriate, the subjects will be provided with additional pertinent information after participation.” [Emphasis added.] 45 CFR 46.116(d)
Synthesizing these three sets of standards, we can recommend the following criteria for an IRB to consider before approving research involving deception:
- Deception is necessary to accomplish the goals of the study.
- Deception is justified by the study’s significant prospective scientific, educational or applied value.
- There are no undisclosed risks to subjects that are more than minimal.
- Deception is not used to elicit cooperation of subjects.
- There is an adequate plan for debriefing subjects as early as feasible.
One concept that may be useful for bridging the ethical gap between informed consent and deception in research is “authorized deception,” where subjects consent to be deceived (Wendler, D., and Miller, F. G., “Deception in Research,” The Oxford Textbook of Clinical Research Ethics, pp. 320-321). Placebo-controlled trials are often designed in this way: subjects are informed that there is a 50/50 chance that they will receive either the active drug or the placebo, but are not told which they receive. To further protect the autonomy of the subjects under this approach, researchers and the IRB should determine that the concealed or falsified information is not information that might reasonably affect the subjects’ willingness to participate. The challenge for IRBs and researchers, then, is to anticipate what information might reasonably have such an impact, particularly since the impact on one participant is not necessarily the same as on another. Consider, for example, a bystander study in which a hired actor pretends to have been assaulted. An uninformed bystander who suffers from post-traumatic stress disorder as a result of an assault may respond very differently to participating in such a study than a person who does not suffer from this condition. It is incumbent upon the IRB, therefore, to understand the vulnerabilities of prospective subjects in order to assess the risks. Not a straightforward undertaking.
One theme that crops up in the literature on deception is the uncertainty about the harms that may result from the deception itself, and the extent to which the debriefing process minimizes such harms. What is the harm of having been deceived, or of not having been fully autonomous in deciding to participate in a research study? According to Wendler and Miller, little research has been conducted on the impact of deception on subjects. It makes sense that authorized deception and debriefing following participation demonstrate greater respect for persons than non-authorized deception without debriefing, but can those practices restore autonomy? What about offering to withdraw data provided by deceived participants? Arguably that restores some measure of control to the subject, but it renders the participation meaningless, which carries its own negative impact. Wendler and Miller recommend that more research be conducted on the impact of deception on research subjects. Only when we have solid research on the potential risks of using deception in research can we enhance protections for the subjects who encounter it.
Given the numerous ethical pitfalls of using deception in research, you might be surprised to learn that approximately half of all psychology research involves deception (Korn, J.H. Illusions of Reality: A History of Deception in Social Psychology, 23-24). It is not going away, so IRBs should be prepared to manage these issues by developing clear policies and procedures on when deception may be justified, the requirements for informed consent, and the debriefing process. If you have such policies already, and are willing to share them, either as guidance for our readers, or to seek input on their development, we invite you to post your links in the comments below.
Resources
- Wendler, David, and Franklin G. Miller. “Deception in Research.” In The Oxford Textbook of Clinical Research Ethics, edited by Ezekiel J. Emanuel, Christine C. Grady, Robert A. Crouch, Reidar K. Lie, Franklin G. Miller, and David D. Wendler. New York: Oxford University Press, 2008, pp. 320-321.
- Korn, James H. Illusions of Reality: A History of Deception in Social Psychology. Albany: SUNY Press, 1997, pp. 23-24.
- The PRIM&R Knowledge Center
- The Declaration of Helsinki
- The Belmont Report
- Code of Federal Regulations, Protection of Human Subjects, 45 CFR 46
- University of Mississippi IRB policy on deception in human subjects research: https://secure4.olemiss.edu/umpolicyopen/ShowDetails.jsp?istatPara=1&policyObjidPara=10879821