My numbers are better than your numbers

By: Wendy Tate, PSM, CIP

In my world, January is metric month.

I spent last month analyzing data from the 2010 calendar year, calculating statistics, and overlaying the information with our workflow. The result: glowing data that shows we have improved our process since last year (insert cheer here).

Inevitably, when I present this data, my director, the VP for research, my staff, and researchers all ask the same question: “How do we compare to our peer institutions?” My response is always, “Great question.”

As I learned at the 2010 Advancing Ethical Research Conference, many institutions are also trying to determine how their metrics compare to those of their peers. While every human research protections program (HRPP) is interested in the same information, such as how long it takes to approve a project or how many submissions a program receives, each institution has its own way of calculating this information. It’s therefore difficult to make direct comparisons among institutions.

For instance, is the approval date the day the IRB voted to approve the protocol, or the day that all the conditions were met? Do you count calendar days or business days? Do you include all days, or do you exclude the days when information was pending from the investigator?
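To make that ambiguity concrete, here is a minimal sketch in Python. Everything in it is hypothetical (the dates, the pending-days figure, and the counting rules are illustrative, not any institution’s actual method); it simply shows how a single protocol can yield several different turnaround numbers depending on which conventions you choose.

    from datetime import date, timedelta

    def days_between(start, end, business_only=False):
        """Count days from start to end, optionally skipping weekends."""
        if not business_only:
            return (end - start).days
        count, day = 0, start
        while day < end:
            day += timedelta(days=1)
            if day.weekday() < 5:  # Monday=0 ... Friday=4
                count += 1
        return count

    # Hypothetical protocol: submitted January 4, IRB voted to approve
    # February 8, final conditions met February 19, with 10 days spent
    # waiting on an investigator response.
    submitted      = date(2010, 1, 4)
    board_approved = date(2010, 2, 8)
    conditions_met = date(2010, 2, 19)
    pending_days   = 10

    # Four of the possible answers to "how long did approval take?"
    print(days_between(submitted, board_approved))                      # calendar days to the vote
    print(days_between(submitted, conditions_met))                      # calendar days to final approval
    print(days_between(submitted, conditions_met, business_only=True))  # business days to final approval
    print(days_between(submitted, conditions_met) - pending_days)       # excluding time pending with the investigator

Until we agree on which convention to use, every one of those numbers is a defensible answer, and none of them is comparable across institutions.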

I attended many quality assurance and metrics sessions at the 2010 AER Conference and met lots of people interested in quantitatively measuring success and identifying bottlenecks. During the conference, representatives from institutions that review their own research expressed frustration with metrics, as did larger organizations such as the Association for the Accreditation of Human Research Protection Programs (AAHRPP) and Western IRB. It seems that everyone is waiting for nationally accepted standards and instructions on how to calculate them.

Without robust quantitative information, it is difficult to do research on the HRPP review process. Do faster review times lead to greater IRB non-compliance with federal regulations? That is hard to answer without standardized data.

Accepted standards and calculation methods would be invaluable: institutions could compare how they are doing in the regulatory review process, identify colleagues to partner with, and find best practices to apply to their own programs. I plan to scour other institutions’ websites for their HRPP metrics and call them to discuss their calculations. I will also be comparing our data to the AAHRPP metrics released last year, and to the Western IRB information I received at AER.

I will be posting our metrics on my institution’s website, and I am happy to discuss how I calculated these statistics with anyone interested (you can get my contact information here). Through the PRIM&R community and Ampersand, we can develop shared metrics that will strengthen the scientific data supporting the regulatory review process.