
Can review become more discriminating?

Dr. Robin Barr
Director, Division of Extramural Activities (DEA)

Several recent commentaries (Danthi, Wu, Shi, and Lauer; Lauer, Danthi, Kaltman, and Wu) have found that the percentile rank an application receives in peer review has little or no noticeable relationship to how productive (in terms of citation impact of publications) a subsequent award is, should the application be so fortunate as to be awarded. So, a first-percentile application is apparently no more productive than a 15th-percentile application. (A larger-scale analysis did find a stronger relationship (Li and Agha), though the very broad range of years in this latter study—1980 to 2008—means that the different analyses were examining quite different data. Back in those halcyon days, success rates were sometimes grand!)

If so many applications are strong…

Is that outcome really surprising? We should not necessarily expect a strong relationship between relative ranking by experts in review and subsequent productivity. Results often run against predictions (more's the fun, of course), and carefully planned experiments can crumble when an animal model does not deliver the intended effect. Still, almost no relationship these days?

Then there’s the “admissions officer” problem mentioned in an earlier blog. As funding lines tighten, review’s ability to discriminate among a large set of very strong applications becomes weaker. In essence, we fund a subset of very strong applications among which there is no meaningful discrimination. The problem is the possible corollary: we are also not funding another subset of very strong applications that no meaningful discrimination separates from those that do receive support. A recent paper (Fang, Bowen, and Casadevall) does find evidence that this problem is real in the NIH peer review system.

Can we better differentiate among proposals?

If we believe that we are now beginning to stretch the limit to which review may discriminate, can we do better? Can we provide more informative detail to review to allow reviewers to make finer discriminations among applications? One suggestion comes from the “short-term, high-priority” awards that I wrote about in the earlier blog. What if the majority of our awards were of this kind? The expectation is that the investigator(s) would return to peer review after funding for one or two years and only then seek the longer-term support that we associate with an R01. We would issue a smaller set of these five-year R01s (and perhaps a tiny set of longer-than-five-year awards) than we currently do, although more people overall would receive awards. Reviewers of the longer-term awards would have a more substantial body of research to consider than they do now—a little like the review of competing renewals. Would that assistance allow review to discriminate more effectively? Would a larger set of shorter-term awards and a smaller set of longer-term awards provide a more advantageous mix for NIA than our current arrangement?

What do you think?

Now it is time for the disclaimer: I am floating an idea for your reactions. Really! You will not suddenly find that most of our funding opportunities offer only one or two years of support. What do you think? Would a one- or two-year award cause deans and department chairs such anxiety that it would seriously impair their relationships with you? (“When is the real money coming…?”) Would they instead be pleased that more faculty have some funds, even if it is not the holy grail of a five-year R01? Could such a strategy lead to more productive research than we achieve with our current strategy?

I wrote the above words originally before NIA received—in December 2015—a $350 million increase for FY2016 targeted toward Alzheimer’s research. In essence, this additional bolus of funds shifts our funding line to a point where we can have more confidence that review can discriminate effectively—at least for applications focused on Alzheimer’s research. That still leaves the problem looming large for our general allocation, where our initial (and, we hope, slightly improving) funding line is the 7th percentile for most applications.
