Talk about a quick dividend! The dust hasn’t even settled on the travel bags that I carried to the 2015 SBER and AER Conferences, and yet I’m already applying the lessons learned to my responsibilities as IRB Chair at Providence College.
As you may recall from my previous Ampersand postings, my learning experiences at this year’s conferences focused on the difficult choices that are faced by researchers who are engaged in “big data” projects. These researchers find it impracticable to engage in routine informed consent practices, and thus they must seek waivers from their IRBs to forgo such activities.
The fundamental lesson that I learned from the conferences is that, under such circumstances, debriefing activities represent critically important opportunities to provide for the needs of human subjects. Because Providence College employs few researchers who engage in “big data” projects, I didn’t expect to utilize this lesson any time soon.
But then, last week, our IRB received a pair of research applications. Were they “big data” projects? Well, no, not at all. They were anonymous survey studies that asked respondents to answer personal questions of a sensitive nature.
Had I not attended the SBER and AER conferences, I would have simply proceeded to an assessment of the following key IRB question: Despite the anonymity of the survey methodology, does the sensitivity of the questions impose a level of “more than minimal” risk on the respondents? Such an assessment, of course, would require a full review of the applications by the entire IRB.
With the fundamental lesson of the conferences in mind, though, I immediately reached out to the researcher and asked if he had considered the inclusion of any debriefing activities. He replied that he had not considered the employment of such activities, and he agreed to modify his applications to include them.
So … will his modification represent the key factor that changes our minds about the level of review of his applications? That’s impossible to know for sure; after all, numerous factors are considered during such deliberations.
Nevertheless, I might not even have considered raising the issue with the researcher if not for my attendance at the conferences. And there is unanimous agreement, among all parties, that the inclusion of debriefing activities improves the quality of the studies.
Ironically, our IRB is still waiting for its first “big data” research application of the academic year. In other words, the fundamental lesson that I learned at the conferences addressed a type of study that we have yet to review.
But that hasn’t stopped me from utilizing my knowledge in a manner that has improved my performance as IRB Chair. Although that utilization represents an unexpected payback for my attendance at the conferences, it indeed provides me with a real dividend for my time.
Michael Kraten, chair of the IRB at Providence College, is a member of the PRIM&R Blog Squad for the 2015 AER Conference. The PRIM&R Blog Squad is composed of PRIM&R members who blogged here, on Ampersand, to give our readers an inside peek at what happened at the conference in Boston, MA.
Michael:
Thank you for describing how you so quickly ended up taking information from classroom to practice. If I may, one question: in implementing the debriefing technique, how do we guard against it being used when it would just be “easier to beg forgiveness than ask permission?” I agree that there are situations in which talking with subject-participants ahead of time would compromise the research, or times when scale and/or the avenue of engagement (e.g. social media data) make consent unfeasible. But I do worry that “debriefing” offers too ready an avenue—too slippery a slope—to researcher laxity: (said in best childlike whiny-plea-to-parents voice) “but, IRB, getting consent is sooo haaaarrrd; giving a debriefing is so much easier…” Or something like that.
I will stress that I don’t see that at work in your example, in which you helped bring an extra level of ethical engagement with subject-participants to the researcher’s attention. And maybe I’m being overly cynical (or despairing), but somehow, I don’t really think so.
Any thoughts?
Sincerely,
DAJ
First and foremost, DAJ, thanks very much for your thoughtful question. The answer is actually contained in the federal government’s OHRP decision charts, which are available here:
http://www.hhs.gov/ohrp/policy/checklists/decisioncharts.html
They’re certainly not an easy read, but if (for instance) you scroll down to Charts 10 and 11, you’ll see two entire decision charts that address your question.
Among other factors, the practicality of obtaining informed consent, the effect of doing so on the participants’ rights and welfare, and the possibility of a breach of participant confidentiality are key considerations.
Is your question still, fundamentally, a judgment call? Yes, of course it is. Nevertheless, the pair of decision charts provides helpful guidance for researchers and IRB members.