Review Matters

Update on Improving Fellowship Review: A Request for Information

April 25, 2023

NIH is recommending changes to the peer review of Ruth L. Kirschstein National Research Service Award (NRSA) fellowship applications by restructuring the review criteria and modifying some sections of the PHS Fellowship Supplemental Form that are specific to NRSAs. The goal of this effort is to facilitate the mission of NRSA fellowship peer review – to identify the most promising trainees and the excellent, individualized training programs that will help them become the outstanding scientists of the next generation. The proposed changes will 1) allow peer reviewers to better evaluate the applicant’s potential and the quality of the scientific training plan without undue influence of the sponsor’s or institution’s reputation; and 2) ensure that the information provided in the application is aligned with the restructured criteria and targeted to the fellowship candidate’s specific training needs. The RFI requests public input on this proposal. To comment, go to the RFI, which contains additional background and links to submit your thoughts. Most of that background follows:

The first stage of NIH peer review provides expert advice to NIH by assessing the likelihood that the fellowship will enhance the candidate’s potential for, and commitment to, a productive independent scientific research career in a health-related field. The criteria for the review of NRSA fellowship applications derive from the NRSA regulation 42 CFR 66.106, which identifies four pertinent factors: (1) the scientific, technical, or educational merit of the particular proposal; (2) the availability of resources and facilities to carry it out; (3) the qualifications and experience of the applicant; and (4) the need for personnel in the subject area of the proposed research or training. NIH currently organizes these criteria under the following categorical labels: Applicant; Sponsors and Collaborators; Research Training Plan; Training Potential; and Institutional Environment and Commitment. By NIH policy, peer reviewers are also required to evaluate Training in the Responsible Conduct of Research, Biohazards, Resubmissions, Foreign Organizations, Select Agents, Resource Sharing Plans, Budget and Period of Support, and Authentication of Key Biological and/or Chemical Resources.

NIH gathered input from many sources in forming this proposal. Unsolicited comments received over a period of years, reflecting persistent concerns that the NRSA fellowship review process disadvantages some highly qualified, promising applicants, led the Center for Scientific Review (CSR) to form a working group of the CSR Advisory Council. To inform that group, CSR published a Review Matters blog post, which was cross-posted on the Office of Extramural Research blog, Open Mike. The post received more than 1,500 views by unique individuals and numerous comments. The working group presented interim reports to the CSR Advisory Council, which adopted the recommendations, at public CSR Advisory Council meetings (March 2022 video, slides; September 2022 video, slides). Final recommendations from the CSR Advisory Council (report) were considered by the CSR Director, as well as by major NIH extramural program, review, and policy committees that included leadership from across NIH. Recommendations were presented to the NIH Advisory Committee to the Director in December 2022 (video, slides).

Improving NRSA Fellowship Review

Recommendation 1: Revise the Criteria Used to Evaluate NRSA Fellowship Applications

An Overall Impact Score (scored 1-9) will reflect the scientific and educational merit of the proposal and an assessment of the likelihood that the fellowship will enhance the applicant’s potential for, and commitment to, an independent, productive research career in a health-related field. Reviewers will take their assessments of the three criteria into account in determining the Overall Impact Score, and each of the three criteria will receive an individual score. The “additional review criteria” below are unchanged; they will not receive individual scores but will be considered in arriving at the Overall Impact Score. The “additional review considerations”, also unchanged, will be evaluated but have no effect on the Overall Impact Score. Review the full text of the proposed changes.

Review Criteria

I. Scientific Potential, Fellowship Goals, and Preparedness of the Applicant (scored 1-9): This criterion emphasizes the applicant’s accomplishments in the context of their stage of training and the scientific opportunities they have had, as well as other factors that bear on their potential to succeed, such as determination, persistence, and creativity.

II. Science and Scientific Resources (scored 1-9): This criterion emphasizes the extent to which the needed technical, scientific, and clinical resources are specified and realistically available to the applicant, whether the scientific expertise of the mentorship team is appropriate for the proposed science, and whether the role of each mentor is clearly defined.

III. Training Plan and Training Resources (scored 1-9): This criterion emphasizes whether the necessary training resources are well specified and practically available, and includes an evaluation of the training philosophy of the sponsor, their approach to training, their time commitments, and their accessibility.

Additional Review Criteria (not scored, but affecting Overall Impact; no changes proposed, see current language):

  • Protections for Human Subjects
  • Inclusion of Women, Minorities, and Individuals Across the Lifespan
  • Vertebrate Animals
  • Biohazards
  • Resubmission

Each of the Additional Review Criteria except the last will be rated either as “Appropriate”, with no comments required, or as “Concerns”, which must be briefly justified. Resubmissions will receive a brief written evaluation.

Additional Review Considerations (not scored and having no effect on Overall Impact; no changes proposed, see current language):

  • Training in the Responsible Conduct of Research
  • Authentication of Key Biological and/or Chemical Resources
  • Budget and Period of Support
  • Applications from Foreign Organizations
  • Select Agents
  • Resource Sharing Plans

Recommendation 2: Revise the PHS Fellowship Supplemental Form

Changes to the application instructions are needed to better align the information applicants provide with the revised review criteria. The PHS Fellowship Supplemental Form currently includes the following four sections: Fellowship Applicant; Research Training Plan; Sponsor(s), Collaborator(s), and Consultant(s); Institutional Environment and Commitment to Training. NIH proposes to revise the Fellowship Applicant section and the Sponsor(s), Collaborator(s), and Consultant(s) section. NIH also proposes to change the instructions for Letters of Reference. An additional proposed change would allow an optional Statement of Special Circumstances from the fellowship applicant. Review the full text of the proposed changes.

Through the RFI, NIH continues to seek public comment on the proposed changes before moving forward with implementation. The RFI will be open for a 60-day period, until June 23, 2023. We look forward to your comments.

30 Comments on "Update on Improving Fellowship Review: A Request for Information"

  1. Michael Gold says:

    I don’t know what happened to my first comment – but looking at this again, it still strikes me that what is missing from these revised criteria is the science. And not, as Thomas Schwarz says, how exciting the science is – but whether there is a testable hypothesis and well-designed experiments that not only test the hypothesis but will be interpretable even if the results are not consistent with it. None of this is mentioned in the points emphasized under each criterion. With an apparent shift in emphasis toward the training environment and resources, these new criteria seem to solidify what is already a pervasive bias in the review process toward well-endowed institutions.

  2. Nigel Cairns says:

    I support the proposals and endorse both Recommendations 1 and 2.

  3. Yasuhiro Suzuki, Ph.D. says:

    I agree that this is the right direction for improving the review process.

  4. lucio comai says:

    I have been involved in the review of NRSA fellowship applications for several years. I strongly support the revised review criteria. They fit much better with the purpose of the fellowship.

  5. Paul Macey says:

    I think the suggested changes are incrementally positive, and will nudge reviewers to better balance the different components of the NRSA when considering potential trainees.

    My general suggestion, or maybe more of a wish, is to simplify the application. At present, the application is so extensive that it mostly limits applicants to people in the usual leading institutions with strong mentors and research networks, and we are probably missing outstanding trainees in other places. Furthermore, I have found that many people considering NRSAs don’t understand how they are reviewed in practice, which again advantages institutions and mentors who are already well established and disadvantages young researchers who happen to be in less established environments. More specifically, in my experience reviewers are looking for more substantial research and mentoring plans than the instructions suggest, so applicants who are unaware of this – because they lack that mentor and institutional knowledge – are at a disadvantage. I’m writing as someone who has had mentees get an NRSA, with all the Research-1 institutional advantages.

    The guide and bullet points are very helpful. It would also help to frequently remind reviewers of these criteria, so they are not reviewing every research plan like an R01, for example.

  6. anonymous says:

    As a reviewer and as a sponsor, I have seen many examples where it was considered a weakness that the applicant was attending graduate school at the same institution from which they received their undergraduate degree. I feel strongly that this is a discriminatory practice and reviewers should be instructed not to consider it a weakness. There are many reasons why an individual may have no option but to attend graduate school at the same institution from which they received their undergraduate degree. Many of those reasons are socio-economic, and therefore considering this a “weakness” disadvantages many minorities and other disadvantaged groups.

  7. Katherine A Sward PhD RN says:

    These recommendations seem reasonable and would be manageable for reviewers.

  8. Andrew Hollenbach, Ph.D. says:

    This is very interesting, and I think overall it is a good change. I’ve been reviewing and/or teaching how to write these applications for years, and it will be a huge shift in how both are done.

    Is there any information on whether these changes have been approved and, if so, when they would be implemented?

    Thank you!

    • CSR Admin says:

      We’ll need to see the results of the RFI first. That said, NIH sought a great deal of input in forming the initial recommendations. The RFI closes in late June, and NIH is likely to share the results of the RFI and next steps/decisions within a couple of months after that. Stay tuned!

  9. Anonymous says:

    I do think we need to help reviewers not be biased by an institution’s reputation or endowments. National training opportunities and mentorship connections have become more widely accessible since COVID, and the resources at a specific institution may not be well known to all reviewers. Unfortunately, in a recent career development award study section, a reviewer working in an East Coast academic setting questioned whether an applicant from an R1 Midwestern academic setting would have the resources they would need. We need to ensure that scientific training occurs across the nation, and this may also increase the advancement of individuals from disadvantaged backgrounds.

  10. Robby Nadler says:

    Potentially, the changes to the score structure would be a good thing (though it’s one of those things where I still fear people can just score as they choose). The other recommended changes seem like an out-of-the-frying-pan-into-the-fire sort of thing. What they really need to do is make applicants (and sponsors) write less – not write different things. Compared to the GRFP or NDSEG, F awards are still a monster application whose scale itself is a barrier to applying – and biases toward elite institutions. If the GRFP and NDSEG can get away with 5 pages of total writing, surely NRSAs can find a way to get below 30?

  11. Beth Pruitt says:

    The first change sounds positive but will need discussion, rubrics, and retraining to get reviewers to score differently.

    The second change sounds like more work for the trainees, more writing, and more to review. We should be streamlining what they write, not adding more to it.

  12. Anonymous says:

    Some labs have too many trainees, easily 10+ (graduate students plus postdocs), working with one PI. The review would benefit from including some criteria on how many trainees one PI can reasonably give time to for training.

  13. Irv Weissman, Stanford University says:

    I have noted, and sometimes tested, over the past 4 decades whether grade point average, GRE/MCAT scores, and/or early involvement in research best predict highly successful scientists in biomedicine. About 30 years ago we tested that at Stanford Medical School. A consensus choice of 50 of our top MD tenured faculty were asked to participate. We asked 3 questions, and got answers from all 50. The questions were:
    1. When you applied to medical school, roughly what percentile was your GPA?
    2. When you applied, roughly what percentile were your MCAT results?
    3. What else was important?
    At that time, about 10-15% of these high achievers would have had their applications survive the GPA screen; the rest would not.
    As for MCATs, there was a greater correlation, but alone it didn’t predict.
    A high percentage, I believe 46/50, worked in research labs in high school or college in the summer, many at the summer programs of the RB Jackson labs in Bar Harbor, and several at Cold Spring Harbor.
    I would recommend a similar approach, so conclusions come from data rather than projection and opinion. Gather the top biomedical scientists and a similar number of people matched for age and gender who are working in jobs related to their broad graduate training but are not world leaders in research, and use a similar type of questionnaire. Given my current experience, I’d also try to ask which kinds of college they attended: Ivy League or other prestigious colleges, well-known state universities, less competitive state colleges or 2-year schools, etc.

  14. De-Ann M Pillers, MDPhD, University of Illinois at Chicago says:

    The focus on the trainee and their readiness is important. Dropping consideration of the institution is good. What I think has been lost in the shuffle is how to assess the mentor. The key relationship fostered by these awards is at the mentor-mentee level, so there needs to be a way to assess the mentor. This need not be based on how many grants they have, but rather: how much time do they have to commit to the trainee? What accomplishments have former trainees made in research? Becoming future administrators is nice, but we need investigators.

  15. Kelly T Dineley says:

    I have sponsored several NRSA applications and served on several training fellowship study sections. My thoughts thus far:
    Previous NRSA instructions emphasized independent careers in the biomedical sciences. Reading between the lines here and elsewhere, it seems that emphasis has been sidelined. If so, make it clear in the instructions.
    It is difficult to ascertain whether these new guidelines and reorganization of the components will alleviate reviewer bias towards ‘famous’ sponsors and institutions. Yet this seems like a decent start.

  16. Richard Mailman says:

    I think the motivation for this effort, and the resulting proposed changes, are excellent; kudos to those who initiated it and participated in the process of revising the criteria.

    In the past, reviews of F applications too often were weighted heavily on a reviewer’s opinion of the research, and not the actual training environment. I saw funded F applications from world-class labs where I knew the trainee would hardly ever see the lab director, whereas promising applicants in a lab with a new assistant professor would not get funded even with an exquisitely detailed training plan. The changes should provide much more balanced reviews.

  17. Philip Clifford says:

    Criterion #2 evaluates “the extent to which needed technical, scientific, and clinical resources are specified.” Criterion #3 should evaluate resources besides just the training philosophy and resources of the mentor. It should also include the professional development opportunities at the institution, through scientific disciplinary societies, and other resources. It’s 2023 and the old apprenticeship model where all of the training comes from the mentor is no longer adequate. Trainees need input from a broader range of resources to prepare for successful careers within academia and outside of academia. This approach should be encouraged in the application process.

  18. anonymous says:

    It is great that “grades are not allowed and should not be included.” This is a much needed change and is very much appreciated.

  19. anonymous says:

    This is a timely change. I hope it can address one of the most critical pitfalls of many fellowship reviews. Even after pre-meeting reviewer training by the SRO, I could still observe reviewers who treated fellowship applications as R01 applications – i.e., expecting preliminary results and a record of publications. This may be attributable to using the same language and review templates as regular research applications. Emphasis on potential and preparedness will help reviewers remember that they are reviewing fellowship applications.

  20. anonymous says:

    My main concern is lowering the paperwork and administrative burden on the PI, applicant, and institution. For the last (successful) one of these I assisted with, the training plan and paperwork were IMMENSE. I don’t think this is really valuable; it just rewards those who have a boilerplate form to fill in. I think there are easier ways to get the information pertinent for review.

  21. J. Christopher States says:

    I think the proposed criteria are much better than the current set.

  22. carroll cross says:

    Diversity of trainees should be one criterion for evaluation of training programs. This could be better established if training programs were better distributed to include more schools with a greater percentage of minority university students.

  23. anonymous says:

    I agree 100%. This has been needed for many years. The lack of input from mentors on many applications at high-profile institutions has been ridiculous. Also, applicants from “resource-poor” undergraduate experiences were often at a huge disadvantage, especially when compared with applicants who had high school research experiences in addition to very well-resourced undergraduate experiences.
    The proposed changes to the sponsor and co-sponsor sections could be game changers, especially for small labs (where time for one-on-one training will be highlighted) and huge labs (where there clearly is not one-on-one time). I also like the letters of support highlighting areas for improvement / training / areas for growth.

    One question left is whether the new scoring would be better as a combined score (summing the three criteria, 3-27) rather than three individual scores of 1-9 and a composite 1-9 score.

  24. Andrea Bertke says:

    One recommendation is to review F31 and F30 applications in different review groups. F30 applications, which are required to include a significant clinical focus, are consistently reviewed as if they were F31s, with reviewer comments of “too clinically focused.” Regarding the new review criteria, I have little confidence that reviewers will change their emphasis. They will simply relocate their opinion of the sponsor and institution within the new criteria.

  25. Thomas Schwarz says:

    What is lacking in these criteria is any consideration of how important and exciting the research is. Currently, study sections also, unfortunately, de-emphasize this crucial aspect and instead count how many flaws they can find (how many minor, moderate, major flaws). The existing system is, therefore, indeed broken since the bigger picture of how exciting the trainee’s project is quickly gets lost as the reviewers chop away at it by enumerating perceived flaws that are sometimes a sign of a high risk/high reward project. Alas, I do not see anywhere in the description of the new system any evaluation of the impact of the proposed work. The reward will be for proposing the most boring, safe project – as long as the resources are there to carry it out. Surely that is not a step forward and it is in strong contrast to the criteria for research grants which, at least in theory, are supposed to emphasize significance and innovation. Do you want to encourage scientific cowardice in our trainees? Do we really want them not to think creativity and significance matter?

  26. M. Mahmood Hussain says:

    Evaluations based on available scientific resources and training resources will be similar to evaluation of the PI and the institution. Well-established PIs and institutions will have better scientific and training resources. The undue influence of the PI and institution will not be eliminated.

    People with fewer resources can also accomplish their stated goals.

  27. Dave Fernig says:

    I am not aware of the details, and my point is likely already covered.
    It is important for a trainee to have at least one mentor who is NOT connected to the sponsor or the institutional mentors and who can act as an advocate for the trainee. A number of UK schemes have implemented this, and it has proven very effective.

  28. Nigel Paneth says:

    The only comment I see on the mentor’s qualifications is the following: “an evaluation of the training philosophy of the sponsor, their approach to training, time commitments, and their accessibility”. I think that the experience and track record of the mentor in providing training to young investigators should also be assessed. How many trainees has the mentor worked with? What have been their accomplishments? How often have young trainees co-authored papers with the mentor in the past?

    I also wonder how “training philosophy” and “approach to training” are to be evaluated. I imagine the only source is the mentor’s own statement, as viewed and interpreted by the study section. This is not an arena with firm and clear standards. There are many approaches to training and philosophies of training, ranging from providing detailed guidance and supervision to permitting trainees to find their own way with guidance provided when requested. I suspect reviewers’ judgment of these grant elements will reflect their own approaches. This thus seems to me a problematic criterion, in which personal perspectives (not to say prejudices) may outweigh objective evidence. I think it would be wise to replace this criterion with objective evidence of the mentor’s success. I would be more interested in knowing whether a mentor has succeeded in helping early-stage investigators achieve success than in trying to assess the mentor’s stated philosophy or approach to mentoring, which may or may not parallel actualities.

  29. Ari Berkowitz says:

    I served on a study section to review NRSA applications 19 times over many years. I was always frustrated by the downgrading of applications from labs and institutions that are not leading recipients of NIH funding. The net effect has been “them that has, gets.” I don’t know whether the proposed changes will improve the odds of applicants from labs and institutions that receive less NIH funding, but I certainly hope so.
