The Approach criterion: why does it matter so much in peer review?
While preparing for a recent talk, I took a close look at our data on the scoring of grant applications. Every applicant wants great scores, and we want to help you understand how you’ll be scored, and why. For example, you may have heard that the Approach criterion score is highly correlated with the final impact score assigned to a grant application. Let’s get into the details of that.
Reviewers use 5 criteria to assess research grant applications, then discuss a final impact score.
As most applicants for NIH grants know, reviewers assess research grant applications using five criteria:

- Significance
- Investigators
- Innovation
- Approach
- Environment
Reviewers give each criterion a separate score. Then these are considered together (along with additional criteria such as human research protections) when the overall impact rating or impact score is given.
How is a criterion score different from an impact score?
Criterion scores are given independently by each reviewer before the review meeting. An impact score is given after discussion when all the reviewers have had a chance to hear each other’s point of view. (About half of applications are not discussed at the review meeting. These not-discussed applications receive only criterion scores. They don’t get final impact scores.)
The Approach criterion score is highly correlated with impact score.
With that separation between criterion scores and the impact score, why does the average of the reviewers’ ratings on Approach correlate so highly with the impact score? In the several analyses that have been conducted (NIGMS, RockTalk), the correlation usually hovers around 0.8. Significance and Innovation also figure in, but their correlations are lower than Approach’s. Investigators and Environment lag well behind the other three criteria. Why does Approach matter so much?
What are reviewers really evaluating in the Approach criterion score?
Usually, when this tight relationship between the Approach criterion score and the impact score comes up, someone deplores the result in a way that seems to demean review. These critics say that reviewers focus on the lowest common denominator in an application: by picking at methods and details, they ensure that methodologically strong but questionably significant applications rise to the top.
In preparing for my talk (to Pepper Center junior investigators), I looked over the kind of criticisms that reviewers stated when giving poor scores on Approach. Methodological criticisms do occur, but far more commonly the criticisms were of the conceptual approach to the science. So I saw comments like these:
- This is an old conceptual approach. So-and-so has conceived the problem differently, and that sheds new light.
- The conceptual model is too simple. Other elements need to be added before the model will be effective in moving the field forward.
It was easy to see that these kinds of criticisms were driving down the Significance rating for the affected applications, too.
Now, I will say that, in better-scoring applications, methodological points were more prominent in the Approach score. Taken together, these observations suggest that the high overall correlation for Approach is driven partly by a substantive consideration (problems in the conceptual approach) and partly by methodological concerns. Thinking that through, I do believe these data confirm my view of reviewers as constructive and thoughtful as well as assiduous and careful! The high correlation of Approach with the impact score may simply reflect how those attributes play out across different applications.
Do you agree? Or, do you have other questions about criterion scores? Let me know by submitting a comment below.