An update on research regarding reviewer expertise

About the author

Dr. Michael L. Callaham is Chair of the Department of Emergency Medicine and Professor of Emergency Medicine at the University of California, San Francisco (UCSF) School of Medicine. He is also Editor-in-Chief of Annals of Emergency Medicine, the official journal of the American College of Emergency Physicians. He received his MD from UCSF in 1970 and completed his residency in Emergency Medicine at the USC Medical Center, Los Angeles, CA. He is a member of the Institute of Medicine of the National Academy of Sciences.

As a result of his editing and publishing experience, his research interests have turned to better understanding the scientific peer review and publication process, including methods of educating peer reviewers and the effects of bias on scientific publication.

Skilled peer reviewers play a major role in the quality of the science a journal publishes, and many journals have trouble finding a sufficient supply of reliable ones. It would therefore be valuable for journals to know what characteristics identify a good reviewer in advance, and how to improve reviewers' skills once they are reviewing. In the past decade our understanding of this topic has deepened, but the results are not encouraging.

It would be very desirable for editors to be able to identify high quality reviewers to target for recruitment, or, at the time of recruitment, to weed out those who will not perform well. Several studies, one including 308 reviewers and 32 editors, showed that factors such as special training and experience (courses on peer review, academic rank, experience with grant review, etc.) were not reflected in the quality of the reviews those reviewers subsequently performed. There was a trend toward better performance among those who had a degree in epidemiology or statistics, as well as those who had already served on an editorial board. Several papers found that more experienced reviewers (more than 10 years out of residency) actually performed more poorly, but for all these variables the relationship was weak and the odds ratios were less than 2.

Therefore, if we cannot identify good reviewers in advance, perhaps we can train them to perform good reviews once on board. A number of studies have examined the impact of formal reviewer training, most of them focusing on the traditional half-day voluntary interactive workshop format. In all these studies, attendees were enthusiastic about the workshop training, felt it would improve the quality of their subsequent reviews, and performed better on a post-test of their understanding of peer review. Unfortunately, even when attendees were compared to controls with similar previous review volume and quality ratings, none of these predictions came true, and their objective quality scores did not change at all. At the journal in these studies, this led to abandonment of these methods, although review quality subsequently rose steadily as a result of other interventions.

These failures led to the study of more substantial interventions that would still be logistically reasonable for a journal to implement. One involved increased feedback to reviewers, who were given explicit information about what was expected in a review, copies of other reviews of the same manuscript together with the editor's rating of each, a copy of a truly superb review of a different manuscript, and the rating their own review received. These interventions (carried out on about 4 reviews for each subject) had no significant impact on subsequent quality performance. Finally, a recent study identified volunteer mentors among the reviewers with the highest performance rankings for review quality, matched them with randomly selected reviewers new to the journal, and encouraged the pairs to discuss each review by phone or email. Like the previous studies, for reasons of practicality this typically involved only 3 or 4 reviews per subject, and like the other interventions it had no effect compared to a control group that received no special attention.

We can conclude that so far none of the fairly easy approaches to reviewer training has been shown to have any effect, probably because teaching the complex skills of critical appraisal requires far more feedback and interaction than editors and senior reviewers can devote to the task.

What then is a poor editor to do? We cannot identify good reviewers in advance, and we cannot train them in any relatively easy, low-resource fashion. This makes it all the more crucial to adopt a validated, standardized editor rating of review quality and apply it to all reviews. Identifying reviewers by their quality performance, periodically stratifying them, and steering more reviews to the good ones has been shown to have a significant effect on the quality and timeliness of reviews as a whole. All this, of course, assumes that one has enough reviewer raw material to make choices, a luxury many smaller journals unfortunately do not possess.
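
As a purely hypothetical illustration (not taken from any of the studies below), here is a minimal Python sketch of what such quality-based stratification might look like, assuming each review has already received a standardized editor rating on a 1-5 scale; the reviewer names, tier labels, and thresholds are invented for the example:

from statistics import mean

# Hypothetical editor ratings (1 = poor, 5 = superb) for each reviewer's
# past reviews; the rating instrument itself is assumed, not specified here.
ratings = {
    "reviewer_a": [4.5, 4.0, 5.0],
    "reviewer_b": [2.0, 3.0, 2.5],
    "reviewer_c": [3.5, 4.0, 3.0],
}

def stratify(ratings, top=4.0, bottom=3.0):
    """Bucket reviewers into tiers by their mean editor rating."""
    tiers = {"preferred": [], "standard": [], "probation": []}
    for reviewer, scores in ratings.items():
        avg = mean(scores)
        if avg >= top:
            tiers["preferred"].append(reviewer)   # steer more reviews here
        elif avg >= bottom:
            tiers["standard"].append(reviewer)
        else:
            tiers["probation"].append(reviewer)   # fewer or no new invitations
    return tiers

print(stratify(ratings))
# {'preferred': ['reviewer_a'], 'standard': ['reviewer_c'], 'probation': ['reviewer_b']}

In practice a journal would presumably also weight timeliness and recency, but the core idea is simply ranking reviewers on their editor-rated quality and allocating manuscripts accordingly.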

Studies referred to in this article are:

Callaham, M.L., Green, S., Houry, D. (2012). Does mentoring new peer reviewers improve review quality? A randomized trial. BMC Medical Education 12:83. doi:10.1186/1472-6920-12-83.

Callaham, M.L., Knopp, R.K., Gallagher, E.J. (2002). Effect of written feedback by editors on quality of reviews: two randomized trials. JAMA 287(21), 2781-2783. PMID: 12038910.

Green, S.M., Callaham, M.L. (2011). Implementation of a journal peer reviewer stratification system based on quality and reliability. Annals of Emergency Medicine 57(2), 141-148.

Callaham, M.L., Tercier, J. (2007). The relationship of previous training and experience of journal peer reviewers to subsequent review quality. PLoS Medicine 4(1), e40. doi:10.1371/journal.pmed.0040040. PMID: 17411314.

Callaham, M.L., Wears, R.L., Waeckerle, J.F. (1998). Effect of attendance at a training session on peer reviewer quality and performance. Annals of Emergency Medicine 32(3, Part 1).
