Peer Review: what early-career researchers should know
A PhD candidate writes about a Voice of Young Science workshop
By James Steele Posted on 30 July 2013
[caption id="attachment_26292" align="alignleft" width="200"] James Steele[/caption]James Steele (@JamesSteeleII) is an Associate Lecturer and PhD candidate at Southampton Solent University in the UK, where he lectures on exercise physiology. He completed his BSc (Hons) in applied sport science in 2010 before moving straight into doctoral study, researching chronic low back pain from an exercise physiology and biomechanical perspective.
The peer review system is of particular interest to Steele as he strives to publish his PhD work and in his role as an Associate Editor preparing for the launch of the Journal of Evolution and Health, published by the Ancestral Health Society.
Here, he writes about a recent Voice of Young Science workshop. The VoYS program is organized by Sense About Science, a charitable trust based in the UK that aims to help the public to make sense of scientific and medical claims.[divider]
On July 5, I attended a Voice of Young Science workshop called Peer Review: The Nuts & Bolts. Attendees were invited to this free one-day event to find out about peer review, debate challenges to the system and discuss the role of peer review for scientists and the public.
Panelists were Dr. Mike Clemens, Visiting Professorial Fellow in Biochemistry and Molecular Biology at the University of Sussex; Dr. Michael Curtis, Editor-in-Chief of Elsevier’s Journal of Pharmacological and Toxicological Methods; and Dr. Irene Hames, Council Member and joint Editor-in-Chief, Ethical Editing, for the Committee on Publication Ethics (COPE) and independent editorial consultant and adviser to the publishing, higher education and research sectors.[caption id="attachment_26298" align="alignnone" width="800"] Panelists (left to right): Irene Hames, Committee on Publication Ethics (COPE) Council; Mike Clemens, PhD, Visiting Professorial Fellow in Biochemistry & Molecular Biology, University of Sussex; Julia Wilson, Development Manager for Sense About Science, who chaired the session; and Michael Curtis, PhD, Editor-in-Chief of the Journal of Pharmacological and Toxicological Methods[/caption]
The author’s perspective
Questions from the attendees for the panel at the workshop included:
- Does peer review illuminate new ideas or shut them down?
- Can a partially blind system really work?
- How do editors avoid bias?
[caption id="attachment_26297" align="alignright" width="432"] Irene Hames of the Committee on Publication Ethics (COPE) Council addresses participants in the Voice of Young Science workshop.[/caption]
In discussion, attendees reported mixed experiences of submitting work for review. Some had received constructive feedback from reviewers who picked up things they had missed, which really improved the quality of their submissions. A fresh pair of eyes is a positive element of the peer review process; sometimes we get too close to our own work to notice certain issues. I currently have a paper that has undergone two rounds of review, and each time the reviewers’ comments have substantially improved my work by highlighting things I had not considered.
In fact, in the Peer Review Survey 2009 conducted by Elsevier and Sense About Science, 91 percent of respondents reported that their last paper was improved through peer review. But not everyone feels their review experience was productive. A colleague and I spoke about our experiences of publishing work that might be considered “controversial” and how stages of the peer review system can shut down such ideas. At the workshop, some attendees commented that reviewers had not understood their work, that their criticisms were unjustified, and that they felt they were not afforded an opportunity to offer a rebuttal.
The reviewer’s perspective
Many at the workshop noted that they were also there to learn how to review. All of those who had already acted as reviewers said they had received no training to do so. Some journals provide templates and questions for reviewers to answer. Templates are helpful in some ways, but many of the questions can be of little relevance, depending on the submission.
It seems that some kind of training is required for reviewers. Indeed, one of the entries in the Peer Review Challenge was a Reviewer Guidance Program offering workshops and mentorship to early-career researchers. One suggestion raised in discussion was for reviewers to appraise a paper the same way they would if they were examining it for citation in their own work.
Is there another way?
While most agree that peer review is a good thing, they don’t necessarily agree that the present system is the best one. At the workshop, we discussed possible additions and amendments to the current system as well as entirely different systems. For example:
- Preferred/not preferred reviewers/Peer Choice – This system – which allows authors to inform editors about potential reviewers and why they may or may not be suitable – has been used by some journals for a long time. It spares editors from having to search for reviewers themselves on topics in which they may not be experts. It can also guard against negative biases, but it could enable positive biases, as authors are likely to suggest reviewers they think will offer a favorable review. The reverse arrangement lets reviewers select the manuscripts they wish to review, making it more likely that people qualified to review a particular manuscript are the ones who do so.
- Cascading of manuscripts – Here the publisher transfers a rejected manuscript to a journal that may be more suitable in scope. The BMJ Group offers this service, and I have used it with some of my own papers: when the editors thought the topic wasn’t suitable, they transferred the work, with my permission, for submission to a more suitable BMJ journal. Sometimes reviewers’ reports accompany these transfers, speeding up the process further (similar to the Streamline Reviews being piloted by Virology). This approach could greatly accelerate the “trickle down” of publication and save authors from having to start over with each submission.
- Open Peer Review/Commentary – The traditional single-blind system of peer review is often demonized for allowing reviewers to anonymously shoot down certain ideas. An open system removes the shield of anonymity: reviewers are identified with the manuscript they reviewed, and in some cases their reports are made public. The idea is to discourage reviewers from making unjustified criticisms and to let manuscripts and criticisms stand purely on their scientific merit. With the advent of more online journals, a comments section attached to each manuscript is another way to allow further review from peers. Many publishers and journals, including BioMed Central and F1000Research, use variations of this system, and it’s something I am discussing with the other editors of the Journal of Evolution and Health as we continue to prepare for its launch.
- “Impact” assessment – Scientifically valid ideas can be rejected for a supposed lack of “impact” or “interest” in the area. A number of newer journals have therefore opted for a peer review system in which reviewers assess only the scientific rigor of the methodology, results and conclusions, leaving the “impact” of the piece to be established by readers after publication. Journals such as PeerJ and PLOS ONE use this method of review and provide substantive guidelines to ensure reviewers adhere to it.
But even if peer review can act as a powerful force of good for scientists and science as a field, the beneficiaries of the work – the public – are often unaware of its importance.
Peer review and the public
At the end of the workshop, we discussed some of what we look for when reading research claims in the media. Who conducted the research? Where was it conducted? What’s the aim of reporting it? Can we get access to the primary literature? And the big question: “Is it peer reviewed?”
That’s where Sense About Science comes in. Science is often shrouded in mystery for the general public. Many don’t know that peer review occurs or even what it actually is. From personal experience of talking with family members and friends, I find they are often surprised at the rigor of the process I must go through to have my research acknowledged and published. Sense About Science has produced a publication called I Don’t Know What to Believe to explain how peer review works and to encourage the public to ask for the evidence behind claims they hear.
A word of advice
Before signing off, I want to leave early-career researchers like myself with two pieces of advice:
- Every experience of the peer review process can be a learning experience, whether you’re an author or a reviewer.
- Remember, authors, editors and reviewers are human, too. The peer-review system is not perfect, and we’re all subject to our own idiosyncrasies. But by recognizing that at the other end of that anonymous manuscript or reviewer’s report is another human being who, if the Peer Review Survey is any guide, is probably motivated to help you, we can see that opening a polite dialogue goes a long way toward achieving the best outcome for everyone. Dialogue with the editor, as an arbiter between authors and reviewers, allows constructive discourse about submitted work and ultimately leads to the most informed decision. If as an author or reviewer you disagree with a comment or decision, respectfully offer your rebuttal, and you never know what may happen.