Cross-posted on Open Mike.
As discussed in the blog post "Update on Simplifying Review Criteria: A Request for Information (RFI)," NIH issued an RFI, open from December 8, 2022, through March 10, 2023, seeking feedback on its proposed plan to revise and simplify the framework for the first level of peer review of research project grant (RPG) applications.
NIH received more than 800 responses to the RFI: 780 from individuals, 30 from scientific societies, and 30 from academic institutions. The vast majority were supportive of the proposed changes, although a minority favored Factor 3 (Investigator, Environment) remaining scored, and a smaller minority advocated for a blinded or partially blinded review process. Most respondents highlighted the need for strong training resources for reviewers, study section chairs, and scientific review officers.
One question that often arises is how investigator and institution will be weighted in arriving at the Overall Impact score if they themselves are not individually scored. Since 2009, when individual criterion scores were added to the review criteria, reviewers have been free to weight these as they see fit in the Overall Impact score; this score has never been an average of the criterion scores. That will remain true under the simplified framework.
Although fully blinded review may be conceptually appealing, NIH is required by statute to assess investigator and environment. Thus, at best, only a multi-stage, partially blinded process would be possible. However, as Nakamura et al. showed (eLife 10:e71368, 2021), anonymization of research proposals is difficult to achieve among reviewers familiar with a given field: about 20% of reviewers correctly identified the principal investigator despite extensive redaction. In addition, while NIH is conducting a partially blinded, three-stage review process for its Transformative Research Awards, which receive fewer than 200 applications per year, attempting to scale that process up to the more than 80,000 applications NIH receives is not feasible. Piloting the changes would require designing a multi-year study, since NIH cannot “carve out” a subset of applications submitted to the agency for potential funding and review them using a different set of criteria.
A trans-NIH committee has been established to implement the simplified review criteria. This committee is developing a timeline and designing the rollout and associated trainings. The evaluation of these changes, the effects of which would be evident only over several years, will include surveys and data analysis. We hope to see a broader range of institution types represented across the scoring ranges, greater diversity in the pool of R01 applicants, and better representation across career stages and PI funding levels (that is, including investigators with no grants or only one other grant).
If we do see improvements, it will be important to place them in the context of all the actions that NIH’s Center for Scientific Review (CSR) is taking to improve peer review, which also include diversifying our review committees, deploying trainings on bias awareness, bias mitigation, and review integrity, and establishing a direct channel for the extramural community to report instances of bias in peer review. These actions are, of course, in conjunction with NIH’s overall efforts to break down structural barriers and advance equity across all of NIH’s activities, particularly through its UNITE initiative, which recently reached its second anniversary.
We thank all who took the time to work with us in this effort to simplify the review criteria framework for RPGs and provide feedback through the RFI and in other ways. We also thank those involved in the other aspects of improving peer review at NIH, which is an ongoing process as more data are generated and analyzed, new questions are asked, and fresh insights are established and shared. The engagement of our community partners is critical to the success of this continued endeavor. We believe these current changes will go a long way in helping us to better identify the science with the greatest potential impact.
I understand the benefits of anonymous reviews, but I think they would be more harmful than helpful. Track records are a good predictor of future success. In my experience, young investigators without a track record are given the benefit of the doubt based on their training. As others have said, I think an anonymous process would remove too much information, such as other sources of funding and what accomplishments that funding has produced. It does sound impossible.
Anonymous reviews are impossible. Self-citation cannot be helped, especially in renewals. The strength of an application is often assessed based on the track record (especially for more senior applicants).