When reviewing goes wrong: the ugly side of peer review
23 March 2018
By Christopher Tancock
Illustrating some of the most common ways that things can go wrong during peer review – and what to do if this happens
"The corruption of the best things gives rise to the worst." - David Hume
In the beginning
Academic peer review took its first steps in 1665 with the publication of the Philosophical Transactions of the Royal Society under the watchful eye of its first editor, Henry Oldenburg (1618-1677). In order to successfully carry out its mission to ensure that “the honor of ye [author’s] invention will be inviolably preserved to all posterity” (Oldenburg, 24 November 1664), it was determined that “… [articles in the Society’s Philosophical Transactions should be] first reviewed by some of the members of the same” (Royal Society Order in Council 1/3/1665). The peer review system has remained essentially unchanged since that time. Nonetheless, the intervening 350 years have seen some laudable modifications and enhancements to the system – and some lamentable corruptions. As editor, you may well have come across some of these phenomena; others may be new to you. Regardless, we hope that this article will serve to ensure that, forewarned, you are forearmed.
Before we begin, it is perhaps helpful to note the context in which these situations take place. The research community has become hypercompetitive, which stems in large part from the intense pressure to publish and the associated impact on evaluation and career progression. In some countries, this pressure comes from institutes or funding agencies; in others, directly from government. Another significant contributor to the issue is the increasing difficulty of finding suitable reviewers with the requisite expertise, time and willingness.
With most, if not all, of the following, prevention is of course better than cure! To that end, we offer a selection of advice after each issue (you might also want to view third-party recommendations such as this one):
When things turn ugly
One of the most common issues affecting reviewing in the modern age stems from the practice of soliciting author-suggested reviewers. As with many things, the intention is good: identify a wider network of reviewers, especially in more specialist areas, by asking an obvious expert – the author – who should be ideally placed to suggest suitable authorities and may save the editor a good deal of legwork. Alas, what we see now and again is that the suggested reviewers turn out to be fake. Or they are indeed real and appropriate authorities in the topic area, but are helpfully suggested along with “their” email addresses – addresses which have been faked and lead back to the author (and a positive review, of course). On occasion the fraud is laughably obvious, but at times the deception is carried out with disturbing verisimilitude.
Apparently there are also agencies that "handle the peer review process" for authors, which then construct fake reviewer accounts. Suffice it to say that this can have a significant impact on journals: if these agencies sense a weakness, they will act quickly to exploit it. This has led, in a couple of notorious cases, to journals having to retract over 100 papers. Obviously, such an incident would be extremely serious for the reputation of any journal and could even lead to de-listing from indexing services such as Scopus and Web of Science.
Even setting aside the serious risks to the journal outlined above, there are other reasons why the practice of soliciting reviewer suggestions might do more harm than good. The editors of the Journal of Neurochemistry published a study based on more than 1,000 submissions, indicating that an article was 2.4 times more likely to be recommended for acceptance by an author-suggested reviewer than by a non-author-suggested reviewer. Unsurprisingly, the journal decided to stop offering this option on the strength of this research.
How to handle fake reviewers
The recommended best practice, if you do employ any author-suggested referees, is to ensure that papers are also seen by additional, independently chosen reviewers. In addition, use Scopus to check the validity of any reviewers suggested by authors. Ask yourself the following:
Is the institute listed against the reviewer’s name credible?
Is the email address provided a third-party one (e.g. @hotmail/gmail/yahoo)?
Are there any indications of a conflict of interest? For example, does the reviewer share the same affiliation as the author?
Is the reviewer a subject expert? A quick check of the reviewer’s history in Scopus should answer this question.
Is the reviewer a regular co-author with the corresponding author? Again, a quick check of the reviewer’s history in Scopus should verify this.
Carrying out these simple checks will go some way towards ensuring fake reviewers are caught before being invited, and you will be pleased to hear that we are currently working on automating the majority of these checks to save you time and effort (stay tuned to Editors’ Update to hear more!). Author feedback on submission via EES and EVISE indicates that suggesting reviewers is the second most painful submission step for (honest) authors. Is it worth the potential hassle, therefore? We think not. If you agree and would like to explore alternative ways of securing trusted reviewers (e.g. volunpeers), please get in touch with your Publisher.
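For the technically minded, the checklist above can be expressed as a simple screening routine. The sketch below is purely illustrative: the field names, email-domain list and data structures are hypothetical, and a real system (such as the automated checks mentioned above) would draw on a bibliographic database like Scopus rather than static records.

```python
# Hypothetical sketch of the reviewer-screening checks listed above.
# All field names and thresholds are illustrative assumptions.

FREE_EMAIL_DOMAINS = {"hotmail.com", "gmail.com", "yahoo.com"}

def screen_suggested_reviewer(reviewer: dict, author: dict) -> list[str]:
    """Return warning flags for an author-suggested reviewer."""
    flags = []

    # 1. Third-party (free) email address rather than an institutional one
    domain = reviewer["email"].rsplit("@", 1)[-1].lower()
    if domain in FREE_EMAIL_DOMAINS:
        flags.append("free email domain")

    # 2. Possible conflict of interest: same affiliation as the author
    if reviewer["affiliation"] == author["affiliation"]:
        flags.append("shared affiliation")

    # 3. Regular co-authorship with the corresponding author
    if author["name"] in reviewer.get("coauthors", []):
        flags.append("frequent co-author")

    # 4. No publication record in the manuscript's subject area
    if not reviewer.get("publications_in_field", 0):
        flags.append("no subject-area publications")

    return flags
```

A suggested reviewer who triggers one or more flags is not necessarily fraudulent, of course; the output is simply a prompt for the editor to do the detective work described above before sending an invitation.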
Citation manipulation is another malfeasance that causes grief during the peer review process. Manipulation comes in two main flavours. The first is where an editor presses authors to cite more papers from their journal (typically in a misguided attempt to “game” the journal’s citation indices). This phenomenon has been observed by various commentators and a recent article on the subject can be found here. The second variant is when reviewers require authors to mass-cite the reviewer’s own papers regardless of their relevance to the manuscript at hand, a topic on which the Committee on Publication Ethics (COPE) convened a forum in 2012.
How to handle citation manipulation
Potential citation manipulation by reviewers underlines the need to read carefully through the recommendations the reviewer makes. If something seems excessive or inappropriate, challenge it or remove it from the review. For our part, we have ensured that guidance for reviewers on this topic is included with their instruction letters as well as in our reviewers’ hub.
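One crude but useful signal when reading a review is the share of requested references that the reviewer authored themselves. The helper below is a hypothetical illustration (the reference structure and the idea of a single-name author match are assumptions, not part of any editorial system):

```python
def self_citation_share(requested_refs: list[dict], reviewer_name: str) -> float:
    """Fraction of reviewer-requested references authored by the reviewer.

    Illustrative only: matches on an exact author-name string, whereas a
    real check would need author disambiguation (e.g. via Scopus IDs).
    """
    if not requested_refs:
        return 0.0
    own = sum(1 for ref in requested_refs if reviewer_name in ref["authors"])
    return own / len(requested_refs)
```

A high share is not proof of manipulation – a reviewer may genuinely be the leading author in a narrow field – but it flags a review worth a closer read before the comments are passed on to the author.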
Conflicts of interest
Anything can be taken to extremes but there are clearly plenty of occasions when someone shouldn't really be reviewing a paper. Sometimes however, potential competing interests aren’t easily visible and only reveal themselves at a later stage. On this topic, be wary of papers which seek to add further author names after reviewer recommendations – this has impacted several journals and is illustrated in this COPE case on adding an author after publication.
How to handle conflicts of interest
Doing detective work using Scopus and/or whatever resources you prefer will be your best bet for flagging potential conflicts. This is one area where it pays to do a bit of homework to ensure that you’re not creating extra work for yourself down the line by opting for an apparently “easy” choice which later turns out to be conflicted. Again, we have provided guidance for reviewers on this topic in the reviewers’ hub and elsewhere.
Not necessarily a wilful breach of responsibilities, but potentially deleterious in its own way, is bias. The Lancet Psychiatry recently published an editorial about gender bias here, and there have been many similar and insightful studies, e.g. here and here. These can be quite specific – for example, this study regarding reviewer and editor bias against Chinese authors.
How to deal with bias
There has been a great deal of focus in recent months on bias in all fields of human endeavour, including academic research. Be alert to the possibility of bias in reviewers – as well as to your own potential biases – and have a look at resources such as this and this.
Perhaps the most brazen (and outrageous) act is that of a reviewer stealing a paper (after recommending rejection) – i.e. rejecting but then submitting their “own” paper on the same topic to another journal.
How to deal with stealing papers
This is obviously not an easy situation to spot (unless said reviewer were foolish enough to submit the second article to your journal) but it could raise an eyebrow if a reviewer rejects an obviously decent paper in contrast to your (and other referees’) opinions.
We hope that these illustrations and tips will prove useful to you in your work as editor. The types of situation we’ve described above are, sadly, on the rise, so it pays to be aware of what can go wrong. Elsevier’s continued guidance to editors is to be alert to potential abuse of the peer review process, which is rare but serious. Review patterns are now also actively monitored, and editors are alerted to any potentially high-risk patterns. To address the underlying issue of engaging and rewarding reviewers, we are heavily invested in supporting our editors by developing best-in-class tools. These include a new system for finding independent reviewers, including a new reviewer recommender tool which is currently being beta tested by several journals. We are equally committed to ensuring that legitimate reviewers receive the maximum recognition for their invaluable contributions via our reviewer recognition program, and to publishing reviewer reports by expanding our successful open peer review pilot.
We will continue to publish articles and briefings to guide you in your work and help you avoid sticky situations, but you might also find it useful to consult our reviewers’ hub and to keep the Publishing Ethics Resource Kit bookmarked for ease of use.