Elsevier Connect

What makes a good peer reviewer? The answer is not obvious

Here's what researchers have discovered about reviewer expertise and training


The Author

Michael L. Callaham, MD

Dr. Michael Callaham is Chair of the Department of Emergency Medicine and Professor of Emergency Medicine at the University of California, San Francisco (UCSF) School of Medicine. He is also Editor-in-Chief of Annals of Emergency Medicine, the official journal of the American College of Emergency Physicians. He received his MD from UCSF in 1970 and carried out his residency in emergency medicine at the University of Southern California Medical Center in Los Angeles. He is a member of the Institute of Medicine of the National Academy of Sciences.

As a result of his editing and publishing experience, his research interests have turned to better understanding the scientific peer-review publication process itself, including methods of educating peer reviewers and the impact of bias on scientific publication.

This article first appeared in Reviewers' Update.

Good peer reviewers play a major role in the quality of the science a journal publishes, and many journals have trouble finding a sufficient supply of reliable ones. It would therefore be valuable for journals to know what characteristics identify a good reviewer in advance, and how to improve reviewers' skills once they are reviewing. In the past decade our understanding of this topic has deepened, but the results are not encouraging.

It would be very desirable for editors to be able to identify high-quality reviewers to target for recruitment or, at the time of recruitment, to weed out those who will not perform well. Several studies, one including 308 reviewers and 32 editors, showed that factors such as special training and experience (courses on peer review, academic rank, experience with grant review, and so on) were not reflected in the quality of the reviews those reviewers subsequently performed. There was a trend toward better performance among those with a degree in epidemiology or statistics, and among those who had already served on an editorial board. Several papers found that more experienced reviewers (more than 10 years out of residency) performed more poorly, but for all these variables the relationship was weak and the odds ratios were less than 2.

Therefore, if we cannot identify good reviewers in advance, perhaps we can train them to perform good reviews once they are on board. A number of studies have examined the impact of formal reviewer training, most focusing on the traditional voluntary half-day interactive workshop format. In all these studies, attendees were enthusiastic about the workshop training, felt it would improve the quality of their subsequent reviews, and performed better on a post-test of their understanding of peer review. Unfortunately, even when attendees were compared with controls of similar previous review volume and quality ratings, none of these predictions came true: the objective quality scores of attendees did not change at all. The journal in these studies consequently abandoned the approach, although review quality did subsequently rise steadily as a result of other interventions.

These failures led to the study of more substantial interventions that would still be logistically reasonable for a journal to implement. One involved increased feedback to reviewers, who were not only given explicit information about what was expected in a review, but also received copies of other reviews of the same manuscript along with the editor's rating of each, a copy of a truly superb review of a different manuscript, and the rating their own review had received. These interventions (carried out on about four reviews per subject) had no significant impact on subsequent quality performance. Finally, a recent study identified volunteer mentors among the reviewers with the highest performance rankings for review quality, matched them with randomly selected reviewers new to the journal, and encouraged each pair to discuss every review by phone or email. Like the previous studies, for reasons of practicality this typically involved only three or four reviews per subject, and like the other interventions it had no effect compared with a control group that received no special attention.

We can conclude that, so far, none of the fairly easy approaches to reviewer training has been shown to have any effect, probably because the amount of feedback and interaction needed to teach the complex skills of critical appraisal is much greater than the time editors and senior reviewers can allot to the task.

What, then, is a poor editor to do? We cannot identify good reviewers in advance, and we cannot train them in any relatively easy, low-resource fashion. This makes it all the more crucial to adopt a validated, standardized editor rating of review quality and apply it to every review. Identifying reviewers by their quality performance, periodically stratifying them on that basis, and steering more reviews to the best performers has been shown to have a significant effect on the quality and timeliness of reviews as a whole. All this, of course, assumes that one has enough reviewer raw material to make choices, a luxury many smaller journals unfortunately do not possess.
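To make that workflow concrete, here is a minimal sketch in Python of what a rating-and-stratification system could look like. The 1-to-5 rating scale, the stratum thresholds, and every name in the code are illustrative assumptions rather than the parameters of the published system (see Green and Callaham, 2011, below, for the real implementation).

    from collections import defaultdict
    from statistics import mean

    # Hypothetical sketch of a reviewer stratification workflow: editors rate
    # every review, reviewers are periodically stratified by their average
    # rating, and new manuscripts are steered to the top stratum.

    ratings = defaultdict(list)  # reviewer name -> list of editor ratings

    def record_rating(reviewer, score):
        """Store an editor's quality rating (1 = poor, 5 = superb) for one review."""
        if not 1 <= score <= 5:
            raise ValueError("rating must be between 1 and 5")
        ratings[reviewer].append(score)

    def stratify(min_reviews=3):
        """Assign each reviewer with enough rated reviews to a quality stratum."""
        strata = {}
        for reviewer, scores in ratings.items():
            if len(scores) < min_reviews:
                strata[reviewer] = "unrated"   # too little data to judge
            elif mean(scores) >= 4.0:
                strata[reviewer] = "top"       # steer more manuscripts here
            elif mean(scores) >= 3.0:
                strata[reviewer] = "middle"
            else:
                strata[reviewer] = "bottom"    # use sparingly or retire
        return strata

    # Example: after a batch of editor ratings, pick the preferred reviewers.
    for name, score in [("A", 5), ("A", 4), ("A", 4), ("B", 2), ("B", 3), ("B", 2)]:
        record_rating(name, score)
    preferred = [r for r, tier in stratify().items() if tier == "top"]
    print(preferred)  # ['A']

The essential design choice is that every review is rated on the same scale, so the strata reflect accumulated performance rather than an editor's one-off impression of a reviewer.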

Studies in this article

Callaham, M.L., Green, S., Houry, D. (2012). Does mentoring new peer reviewers improve review quality? A randomized trial. BMC Medical Education 12(83). doi:10.1186/1472-6920-12-83.

Callaham, M.L., Knopp, R.K., Gallagher, E.J. (2002). Effect of written feedback by editors on quality of reviews: two randomized trials. JAMA 287(21), 2781-2783. PMID: 12038910.

Green, S.M., Callaham, M.L. (2011). Implementation of a journal peer reviewer stratification system based on quality and reliability. Annals of Emergency Medicine 57(2), 141-148.

Callaham, M.L., Tercier, J. (2007). The relationship of previous training and experience of journal peer reviewers to subsequent review quality. PLoS Med 4(1), e40. doi:10.1371/journal.pmed.0040040. PMID: 17411314.

Callaham, M.L., Wears, R.L., Waeckerle, J.F. (1998). Effect of attendance at a training session on peer reviewer quality and performance. Annals of Emergency Medicine 32(3), Part 1.




2 Archived Comments

Catriona Fennell, Elsevier January 15, 2013 at 11:03 am

Chrissy, thanks for your comment; it really illustrates that there's an art to understanding different reviewers' motivations!

Paying referees is certainly one of many possible solutions to improve the timeliness of peer review. The wealth of ideas out there for incentivizing and recognizing reviewers was reflected in Elsevier's recent Peer Review Challenge, which received 800 entries describing all kinds of innovative approaches.

Some of our journals already pay reviewers, and the editors of the Journal of Public Economics have conducted fascinating research comparing the effect of different types of incentives on review times (http://editorsupdate.elsevier.com/2012/08/refereeing-behavior-and-the-determinants-of-altruism). However, advice from our editors and continuous surveying of reviewers tend to indicate that lack of financial compensation is not the biggest obstacle to timely, insightful reviews. Rather, we hear that the biggest obstacles are sheer lack of time and receiving very low-quality papers or papers that don't match the reviewers' expertise.

To that end, we encourage editors to adopt a suitable "Desk Reject" policy to ensure reviewers only receive papers of reasonable quality, and we created Reviewer Finder to help editors find the perfect reviewer for each paper. We also get great responses to publicly thanking reviewers by inviting them to special receptions, listing them on the journal website, and providing them with 30 days' free access to ScienceDirect and Scopus per review. In addition, Elsevier offers Reviewer Workshops to new reviewers, and we keep reviewers up to date via our Reviewers' Update newsletter (http://www.elsevier.com/reviewers/reviewers-update).

We will continue to listen closely to editors about what best motivates reviewers in each community and combine new with ‘tried and tested’ approaches.

Chrissy Prater January 10, 2013 at 2:59 pm

Companies that provide consistently high-quality independent peer reviews also have quite a bit of experience dealing with this problem. I manage the peer review division at my company, and we have had relatively good success in the realm of monetarily compensated peer reviews with the feedback/show-examples method, utilizing young and eager scientists at the post-doc and early faculty career levels. Giving feedback is somewhat time intensive, but specialized web tools and well-trained management staff equipped with standard protocols have made this easier. When an individual is keen to improve his/her skills and gain experience reviewing at a professional level, they are eager to please, particularly in a paid model. We've found that most of the reviewers at this level significantly improve their work on the first review post-feedback.

However, this method definitely does not work for most reviewers who are further advanced in their careers. Many (but not nearly all) of these late-career scientists seem unwilling to provide constructive criticism at the desired depth, and some individuals admit to us that they are just unwilling to change, or do not have time to provide better reviews, or insist that what is "good enough" for the typical journal reviews they are used to doing should be good enough for our company (which is not even remotely true when you're a company providing a service directly to an author - so, thanks, journals, for letting reviewers think that these crummy reviews are acceptable). We prefer not to send reviews to scientists with this outdated "drive-by" review mentality, which means reviewer recruitment is a full-time job for our division.

After working as a Managing Reviewer for the last four years, I am convinced that a modest yet fair reviewer compensation model works, and there are plenty of potential peer reviewers out there who are ready and willing to do high-quality work. They just may not be in the places you are looking. I will take a fresh post-doc reviewer who knows his/her stuff and will write a detailed and thoughtful summary over a department-chair drive-by review any day. When you compensate reviewers, you have the leverage to demand better reviews. Reviewers who submit crummy or late reviews after being asked to make improvements do not receive additional review opportunities from our company. Journals that are having trouble identifying and retaining reviewers who perform consistently well might consider outsourcing peer reviews to trusted companies that specialize in doing just that.

