Posted by guest blogger Eunice Newbert, Manager, Education and Quality Improvement Program at Children’s Hospital Boston
After conducting study reviews of ongoing clinical trials for almost six years as part of the Education and Quality Improvement Program (EQuIP) at Children’s Hospital Boston, my favorite part of the work is, of all things, witnessing the evolution of investigators. I don’t mean improvement quantitatively gauged through metrics, but rather a notable change in perspective and understanding, evident in their increased willingness to learn and improve.
As more research institutions implement quality improvement (QI) efforts to strengthen the protection of human subjects participating in clinical research, I am happy to have a growing number of peers with whom to discuss and debate an equally growing number of QI-related topics and issues: defining non-compliance; developing monitoring and auditing tools; setting benchmarks; deciding what to report, to whom, and how; selecting studies for review; tracking trends; working with IRBs…and the list goes on. But one thing often missing from these discussions is why a specific investigator deviates from the approved research study.
To put it simply, my job is to identify deviations, non-compliance, and areas for improvement, and then to develop corresponding corrective actions and education. Ultimately this information is quantified, tracked, and used to develop education and resources for continual improvement at an institutional level. But to make this whole process effective, I believe the QI reviewer must take the time to understand the context in which each deviation occurred when it is first observed, and then tailor comments accordingly for each investigator.
When QI reviewers take the time to understand the context in which a deviation occurred, we can translate the regulations—explaining how and why they apply to that specific issue—and then tailor corrective actions to the investigator. In my experience, this fosters a willingness in the investigator to make corrections that go beyond study documentation and reporting. With a better understanding of how and why specific regulations apply, investigators are more willing to change their overall conduct, which, in the end, increases the chances of continual, ongoing improvement after our review is over.
Of course, I will always use my monitoring tools to track measurable benchmarks and help determine what we, as an institution, can do to improve the compliance of our research community. But it is during the final meeting of a study review, when I personally see a new willingness from the investigator to learn and improve, that I feel confident I have done my job well.
The comments and opinions in this post reflect the opinion of the author, and do not necessarily represent the opinion of PRIM&R or its Board of Directors.