Review Matters

Announcing a Simplified Review Framework for NIH Research Project Grant Applications

October 19, 2023

Cross-posted on Open Mike.

As we have discussed in previous blogs, NIH has heard concerns from the extramural community about the complexity of the peer review process for research project grants (RPGs) and the increasing responsibilities of peer reviewers in policy compliance. NIH has also heard concerns about the potential for reputational bias to affect peer review outcomes. After careful input gathering, development, and discussion, NIH is pleased to announce that a Simplified Review Framework will be implemented for grant receipt deadlines of January 25, 2025, and beyond.

The simplified framework is expected to better focus peer reviewers on the key question needed to assess the scientific and technical merit of proposed research projects: “Can and should the proposed research project be conducted?” To achieve this, the five current review criteria (Significance, Innovation, Approach, Investigator, and Environment, as defined in NIH peer review regulations at 42 C.F.R. Part 52h.8) are being reorganized into three broader factors to help reviewers focus on the crucial questions that determine scientific merit. Reviewers will consider all three factors in determining the overall impact score, which reflects their overall assessment of the likely impact of the proposed research.

  • Factor 1: Importance of the Research (Significance and Innovation), factor score 1-9
  • Factor 2: Rigor and Feasibility (Approach), factor score 1-9
  • Factor 3: Expertise and Resources (Investigator and Environment), either rated as sufficient for the proposed research or not (in which case reviewers must provide an explanation)
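
To make the scoring structure above concrete, here is a minimal illustrative sketch in Python of how a single reviewer’s assessment under the three factors could be recorded. The class, field names, and validation rules are assumptions made purely for illustration, not part of any NIH system or policy; scores follow the 1 (best) to 9 (worst) convention used in NIH review.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ReviewerAssessment:
    """One reviewer's scores under the simplified framework (illustrative sketch only)."""
    factor1_importance: int          # Importance of the Research (Significance, Innovation), 1 (best) to 9 (worst)
    factor2_rigor_feasibility: int   # Rigor and Feasibility (Approach), 1 (best) to 9 (worst)
    factor3_sufficient: bool         # Expertise and Resources (Investigator, Environment): sufficient or not
    overall_impact: int              # holistic 1-9 judgment; how the factors are weighted is left to the reviewer
    factor3_explanation: Optional[str] = None  # required whenever factor3_sufficient is False

    def validate(self) -> None:
        # The two numeric factors and the overall impact score must fall in the 1-9 range.
        for name in ("factor1_importance", "factor2_rigor_feasibility", "overall_impact"):
            score = getattr(self, name)
            if not 1 <= score <= 9:
                raise ValueError(f"{name} must be between 1 and 9, got {score}")
        # A "not sufficient" rating on Expertise and Resources must be accompanied by an explanation.
        if not self.factor3_sufficient and not self.factor3_explanation:
            raise ValueError("An explanation is required when Expertise and Resources are rated not sufficient")
```

Under this sketch, ReviewerAssessment(factor1_importance=2, factor2_rigor_feasibility=3, factor3_sufficient=True, overall_impact=2).validate() passes, whereas rating Factor 3 as not sufficient without an explanation raises an error.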

A significant concern being addressed in the simplified framework is the potential for general scientific reputation to have an undue influence on application review. By changing the evaluation of Investigator and Environment to a binary decision of sufficient or not in the context of the proposed research, Factor 3 aims to help mitigate this potential biasing influence.

Another concern addressed in the new framework is the reliance on peer reviewers to assess policy compliance. Relying on peer reviewers for these tasks has the potential to distract them from their chief goal of assessing the scientific and technical merit of an application. To reduce reviewer burden, NIH staff will assume administrative responsibilities related to the Additional Review Considerations of Applications from Foreign Organizations, Select Agents, and Resource Sharing Plans.

The results of a request for information underscored the need for resources and guidance for investigators, reviewers, and NIH staff. We therefore developed a new Simplified Review Framework webpage, which is now live and will serve as a central repository of information on this initiative. NIH is developing an integrated set of training events and resources, to be rolled out over the next year, to communicate the changes to applicants, reviewers, and NIH staff. This support will begin with a webinar on November 3, 2023, to provide the public with an overview of the new framework and what to expect over the upcoming year in preparation for implementation. You can learn more and register for the webinar here.

We expect the Simplified Review Framework to have minimal impact on how applications are written. The intent is simply to focus reviewers on the fundamental questions that we have always asked them to address in reviewing grant applications for their scientific and technical merit, while minimizing the impact of reputational bias. We will keep you updated through our webpage, notices in the NIH Guide, and the Review Matters and Open Mike blogs. We look forward to working together with the community to continue to improve peer review.

31 Comments on "Announcing a Simplified Review Framework for NIH Research Project Grant Applications"

  1. anonymous says:

    I was hoping that NIH would also add a “health equity” requirement to the review criteria, at least for human subjects research. In this section, investigators should discuss how their project will or will not be generalizable to various populations (by gender, race, etc.) and the ways it can or cannot advance scientific progress in an equitable way. It’s time we consider science as “progress” only when it benefits all populations, and not when it is designed in ways that leave some populations behind.

  2. Mark Sherman says:

    Recommend that institutions simply post a description of the facilities on a central site, which can be consulted by reviewers when needed. It is a waste to have investigators include pages of boilerplate information in every application. This section should address “special features of the facilities and environment” when relevant, limited to at most a single page.

    A second suggestion is to develop a process in which NIH staff in the extramural divisions review applications. This could be started as a pilot (initially, without formal inclusion in scoring) and modified to reach a workable solution (with later inclusion in scoring). The expertise of the extramural staff is not sufficiently incorporated into the review, and in reality, this would provide a truly unbiased review from individuals who know the field.

    Finally, has NIH developed methods, based on machine learning or other approaches, to score the value of previously funded research? What we need most are objective ways of predicting the value that grants will yield, and using the wealth of past data would be the place to start.

    Thank you.

  3. Lei Jin says:

    I support NIH’s effort to reduce the undue influence of reputational bias, positive or negative, on peer review outcomes. NIH funding is an investment, and “past performance is not indicative of future performance” appears in every investment prospectus mandated by the SEC. In addition, with this Simplified Review, I hope NIH can require evidence-based comments (with citations) from reviewers. Too often, reviewers’ comments are opinionated.

  4. anonymous says:

    While simplifying the review framework is always good, I am afraid it’s rather delusional to think that reputational bias will be reduced with these changes. The only way to get rid of it is to review anonymized proposals and shift the task of evaluating feasibility to NIH itself. Feasibility evaluation isn’t such an onerous task, after all.

  5. David Wood says:

    I think this is a step in the right direction, as I have been on too many panels where high scores were based on the fame of the applicant, or of the applicant’s postdoc advisor (even in cases where the proposals had significant scientific flaws). I am hopeful that this approach will decrease that tendency. I believe that an equally important problem, however, is the opposite: that many proposals are not taken seriously because of the lack of fame of the applicant. Badly written reviews are the result, in some cases indicating that the proposal was never read beyond the specific aims, or even the abstract. I am therefore hopeful that panel chairs will use this review change as an opportunity to hold reviewers accountable for the scores and critiques that they provide.

  6. Byron C. Jones says:

    My experience with peer review of my work over the past few years has been disappointing. I am finding that the reviews are uneven and one reviewer can tank the entire proposal. Also, too often the reviewer does not read the proposal carefully, makes up negative points, and marks the proposal down based on the false criticism. Hopefully, the new system will help remedy the situation.

  7. Irving Weinberg says:

    I like it. I have seen many bad proposals where the review committee says “he knows what he is doing” and gives the proposal a good score anyway. The reality is that young investigators and industry scientists are responsible for more innovation than senior academic scientists.

  8. Mesut Sahin says:

    I like the new format. It can really reduce the review time and the burden on the reviewers, especially because the Additional Review Criteria are removed from the review.
    This way there will be more time to write detailed comments on the most critical aspects of the proposal.

  9. Maria Hatzoglou says:

    The change is very promising for future reviews. However, an additional change should be considered. The bullet points are not always effective at bringing out the significance and the potential issues with the approach (factors 1 and 2). I suggest including itemized questions instead of bullet points. This requires some thought about which questions to include. For example, in factor 1, potential questions could be: What is the gap in knowledge that the proposal is addressing? Why is filling the gap timely and necessary? And more questions of this type. I believe the itemized-questions approach will decrease the bias favoring star scientists who submit weak proposals.

  10. anonymous says:

    This is a significant step by NIH, and it helps reduce the reviewers’ burden.
    I also have another suggestion for consideration: would it be possible to conduct a pre-review of the application, for example, one specific aims page, by a larger group (>3) that would include clinical and basic researchers and potentially laypersons weighing in on the importance/significance of the proposed project? These groups of individuals could be selected in a randomized manner and potentially blinded. In the second phase, highly relevant applications could be invited to submit a full proposal. It is possible to work on a research approach; however, you cannot correct the original idea and its significance in advancing the field and solving diseases.

  11. Sara McBride-Gagyi says:

    Will the reviewers be blinded in any way to who the submissions are from? I like the intent here and feel the adequate/inadequate rating for Investigators/Environment will still ensure that funding goes to people with the expertise and additional resources to do each meritorious project well. I recently participated on a review panel with a similar structure. After identifying good science, we were unblinded to the investigators for those applications and asked to determine whether we thought they could do the projects. However, I feel reviewers may still be biased to give high scores on the other two sections if they definitely know who the investigators are. I personally have seen that happen (recently) when serving on review panels for other governmental organizations where the investigators were known to the reviewers.

  12. Appu Rathinavelu says:

    Thank you for the efforts to make the review process fair and for supporting projects based on Significance and Innovation. Also, bringing term limits to study section members will minimize the opportunity for monopoly. New Investigators and investigators who are trying to get their foot in the door of the NIH funding system should be given a fair chance to succeed.

    • CSR Admin says:

      Thank you for your comment. Members of study sections serve a finite term of 4 years (attending 3 meetings per year) or, rarely, 6 years (attending 2 meetings per year).

  13. David Vilkomerson says:

    Excellent changes!
    Perhaps one more small change. In my experience, very few applicants bother to submit proposals that would rate below a 5 in the new categories. Therefore, in my experience the scoring in the new categories would typically fall between 2 (Super!) and 4 (meh); with paylines in the vicinity of 3, it makes a significant difference whether one of the category scores is a 3.8 or a 3.2. (This has always been a problem, but it is exacerbated by having only two numerical ratings.)
    So, a small tweak: allow two significant digits, i.e., a decimal point and one digit. (Calculators can handle two significant digits….)

  14. anonymous says:

    The discounting of investigator expertise is dangerous and damaging to the entire scientific enterprise. A track record of excellent, innovative and reproducible science is the best predictor of future success. To focus entirely on approach/innovation/importance will reward investigators most skilled in the art of grant writing (or who have access to professional staff). As a member of an under-represented class, I also fear we will be disproportionately harmed.

    • Donald Gilbert MD MS says:

      Regarding the concern “The discounting of investigator expertise is dangerous and damaging to the entire scientific enterprise. A track record of excellent, innovative and reproducible science is the best predictor of future success”: it seems to me that “Feasibility” will include an assessment of “Is this project feasible in these investigators’ hands?” How do I know I can trust the PI/team? They have a track record that makes it feasible for them to accomplish this project.

  15. Huntington Pottet says:

    These are excellent changes that will better address the needs of peer review and reflect the approach that experienced reviewers already take under the current framework to reach an impact score.

  16. Ken Kellar says:

    This looks like it will be an important improvement. Thanks.

  17. Paul Bray says:

    Will there still be an overall Impact Score? If yes, how does it relate to the 3 new categories?

    • CSR Admin says:

      Yes, reviewers will enter an overall impact score. The overall impact score should reflect the reviewer’s assessment of the likelihood for the project to exert a sustained, powerful influence on the research field(s) involved, in consideration of the three factors and the additional review criteria. Reviewers will be free to weight the three factors as they see fit in arriving at the overall impact score.

  18. anonymous says:

    This is certainly a step in the right direction! I appreciate the effort NIH is investing to improve the peer review process.

  19. Michael L Hecht says:

    I really like the way the new criteria explain how “innovation” should be evaluated. Previously, the emphasis was on something “new” and “different” rather than on something effective. So, a web-based or app intervention was not innovative, but AI or virtual reality, regardless of how practical it would be to take it to scale, was innovative.

  20. Yehenew M. Agazie says:

    Recognition of bias is an important step by NIH. I would like to add two points that I have observed in my more than 10 years of reviewing for NIH:
    1. Acquaintance bias
    2. Expert composition of a study section

  21. elizabeth bonney says:

    The review process should include a statement of diversity, equity, and inclusion: who is doing the project, and how this is an opportunity to build an independent and diverse workforce in science.

  22. Undiscosed says:

    Just remove Expertise and Resources from the review process. Blind the reviewers. Let the applicant’s institution certify the Expertise and Resources, and let a random reviewer evaluate the expertise or the facility. Unless you blind the reviewers we will NEVER have a fair process.

  23. Hank Seifert says:

    In my opinion, the PI’s productivity in past work (stage appropriate), particularly in renewals, was not factored appropriately into the past review criteria and has no place within the new criteria. Why scientific reputation is not considered an appropriate factor escapes me; excluding it does not enhance the reliability of peer review.

  24. Alfred S. Lewin says:

    Focusing reviews on the three factors mentioned will provide better feedback for applicants and allow reviewers to avoid dealing with issues that are better handled by program officers.

  25. Rocio Rivera says:

    Excellent move! I believe that this will lead to better use of reviewers’ time when reviewing proposals.

  26. Robert Caudle says:

    Changing the evaluation of expertise to a binary scale will not reduce reputational bias. Individuals recognized as the leaders in the field will still get a bump in their scores because of their reputations even if the research proposed is less than stellar. The only way to avoid this bias is blinded reviews.

  27. Geo says:

    I understand that there is a concern that established investigators with an excellent track record and stellar reputation might receive unduly high scores for proposals that are not sufficiently strong. However, a problem with this new Simplified Review Criteria approach might be that it does not address how a reviewer should score a proposal submitted by an investigator with a bad reputation. How will the new review process address this situation?

  28. Sumant S. Chugh says:

    I think this is a good idea. Not all accomplished scientists write excellent grants beyond the peak of their career, but they still get credit for the past. CSR review should be based on present and future potential, not on past accomplishments. On the flip side, some newer investigators have amazing ideas but not much institutional recognition or track record. This would put them on par with more established investigators by assessing item 3 as “sufficient” (if applicable).
