What researchers think about the peer-review process

Who is feeling the pressure the most?

Peer review is the cornerstone of scholarly publishing, but it can be a source of frustration for researchers. Criticisms levelled against it include claims that it slows publication, consumes too much time and is inherently biased.

In this article, we draw upon the results of Elsevier studies that reflect the views of more than 3,000 active researchers. We not only examine their thoughts on the current peer-review landscape, but explore their expectations for the future.

What did we find out?

Most researchers – 70 percent – are happy with the current peer-review process, a satisfaction rate higher than those recorded in similar surveys in 2007 and 2009. When asked if peer review helps scientific communication, 83 percent of those we surveyed agreed, with comments such as: "I have had reviews that were very insightful. When researchers get their nose caught in the lab book, we cannot see the forest through the trees. Having a peer look at your science helps expand the overall view." (Researcher in Environmental Science, Switzerland, aged 36-45.)

However, there is room for improvement; a third of researchers believe that peer review could be enhanced. The improvements they suggested included reviewers being given more guidance, and incentives to encourage reviewers to volunteer their time. Moreover, 27 percent of respondents felt that peer review is holding back scientific research. A Computer Science researcher from the UK explained: "Peer review for journals is often slow and biased. Radical ideas will be rejected as peer review is inherently conservative."

Is pressure increasing on reviewers? Perhaps surprisingly, only 29 percent of researchers surveyed believed that to be the case – although that is 10 percentage points more than in the survey conducted in 2009. Reasons cited include time pressure, lack of incentives, lack of specialist knowledge and too many (poor-quality) papers being sent for review.

If we take a look at which countries are bearing the brunt of increased pressure, it seems that US researchers are reviewing more papers than they are writing. On the flip side, Chinese researchers are writing substantially more papers than they are reviewing. The reason for this gap is that Chinese researchers are not being asked to review as frequently, even though they have one of the highest review acceptance rates of any country.

A slide from The Future of Peer Review presentation given by Elsevier's Research Director, Adrian Mulligan, at the ALPSP (Association of Learned and Professional Society Publishers) Seminar in November 2013.

Anonymity appears to be important to researchers when it comes to the review process. We asked them about peer-review options ranging from double blind and single blind to various forms of open peer review and discovered that, as transparency increased, the likelihood of authors submitting papers decreased: 46 percent said they were unlikely to submit to a journal that published reviewers' names alongside reviewer reports. It would appear some authors do not like the idea of their mistakes being made public. A slightly greater proportion – 49 percent – indicated they would be less likely to review for a journal that offered such open peer review.

Peer review has many purposes, from improving the quality of a paper to detecting fraud. Clearly, high expectations are placed upon the process, and for the most part researchers believe peer review delivers. However, respondents indicate it is failing in two areas: detecting plagiarism and detecting fraud.

Listen to our free peer-review webinar

The contributors to this article, Adrian Mulligan and Dr. Joris van Rossum, recently hosted a webinar, The peer review landscape – what do researchers think? The archive version is now available to view.

What is Elsevier doing to address some of the peer-review issues identified?

There are a number of projects underway at Elsevier to support reviewers and the peer-review process. These include:

The new Reviewer Recognition Platform

This initiative provides participating reviewers with a personalized profile page where their reviewing history is documented. They are also awarded reviewer statuses based on the number of reviews they have completed for a specific journal – see Elsevier's Reviewer Recognition Platform prepares for next phase in this issue to discover how the project is evolving.

Integration of CrossCheck into EES

The plagiarism tool CrossCheck has now been integrated into EES (Elsevier's Editorial System) for a large number of journals. Once an author has submitted a manuscript, EES will automatically upload the editor PDF to CrossCheck's iThenticate website, where it will be checked against a huge database of publications. Editors can then view an automatically generated similarity report within EES. Over the past few months, several feature enhancements have been introduced and we will continue to roll out the tool with the aim of making it available to all journals by the end of 2014.

A new direction for the Article Transfer Service (ATS)

As outlined in the article How we can better support and recognize reviewers in Editors' Update Issue 42, editors of 600 Elsevier journals have for some time been able to suggest alternative journals to an author whose paper they have just declined. If the author agrees to submit to the new journal, any completed reviews can be transferred along with the paper.

In the article, we mentioned that a new experiment with six Elsevier soil science journals aims to improve on the service by offering editors who decide not to accept a paper two new options in EES: decline and reject. Decline simply says the paper is not suitable for the journal and allows the author to submit the paper – and reviews – to another soil science journal in the pilot. Reject means the author will not be invited to submit elsewhere.

Early results are proving encouraging, with the number of manuscripts offered for transfer doubling under the new system. Dr. Joris van Rossum, Director Publishing Innovation at Elsevier, commented: "Not only do we see significantly more manuscripts offered a transfer, we also find many more authors accepting the offer to submit the article to another journal. We are hopeful that, in this way, authors will find the final destination for their manuscript faster and more efficiently."

Contributor biographies

Adrian Mulligan is a Research Director in Elsevier's Research & Academic Relations department. He has more than 15 years' experience in STM publishing, much of it spent in research. He oversees Elsevier's Customer Insights Programs, ongoing tracking studies used to drive action in the business and help shape Elsevier strategy. Alongside these, Research & Academic Relations works in partnership with external groups to deepen understanding of the scholarly landscape across the industry. He has presented on a range of research-related topics at various conferences, including STM, ESOF, AAP, SSP, APE and ALPSP. Mulligan's background is in archaeology; he holds a BA Honours degree and a Master of Science from the University of Leicester. He also has a diploma in Market Research from the Market Research Society.

For the past 12 years, Dr. Joris van Rossum has been involved in the launch and further development of many products and initiatives within Elsevier. From its inception, he worked as a Product Manager on Scopus, the largest abstract and citation database of peer-reviewed literature, and he worked on Elsevier's search engine for scientific information as Head of Scirus. Later, he developed the Elsevier WebShop, which offers support and services for authors at many stages of the publication workflow. In his current role as Director Publishing Innovation, Dr. van Rossum focuses on testing and introducing important innovations with a focus on peer review. He holds a Master of Science in biology from the University of Amsterdam, and a PhD in philosophy from VU University Amsterdam.

Archived comments

han geurdes says: November 14, 2014 at 11:53 am
Interesting stuff about reviews. Thanks.

Anonymous says: November 14, 2014 at 3:56 pm
I'm surprised about the high review acceptance rate of China, if true. My experience is that Chinese manuscripts are rejected more often, and too often are not aware of earlier publications on the subject in question.

Jeffrey Robens says: November 15, 2014 at 12:28 pm
Very interesting insights. As I work with many non-native English speaking authors, they often find many reviewer comments confusing and difficult to understand, which I think is another important point to consider. One way to help improve peer review is for reviewers to be sure that their comments are direct and clear. This can also help save the journal time by ensuring the necessary revisions (hopefully!) will be taken care of during the first round and require subsequent rounds of clarification.

GDali says: November 18, 2014 at 7:52 pm
Bogus study. Where is the proof?

Joab says: November 18, 2014 at 7:55 pm
Where is the recent study mentioned? This is all hearsay.

Adrian Mulligan says: December 1, 2014 at 3:36 pm
The summary results in the presentation are based on a broader report, which will be made available in due course. The results presented build upon earlier research. These earlier studies have been published and the reports are available here:

Lawrence Lynn says: November 22, 2014 at 4:12 pm
The primary problem with traditional peer review by experts is described by Dr. Kuhn. Reviewers force researchers into their often failed paradigm. This article explains how that can slow vital research leading to stagnation for decades due to control of arbitrary gold standards.


Consider the failed acid theory of peptic ulcer disease. This was also maintained by Kuhn social forces.

We published in Open Access because it is a means to get the truth out. When one reads studies of sepsis in the traditional journal there is a component of propaganda as a function of the use of guessed gold standards held by the thought leaders. When their large RCTs fail to be reproducible (as they all have) they spin because Kuhn forces prevent introspection relevant the fundamental dogma.

Each group should formally promulgate their fundamental gold standards and criteria as possible dogma identifying and grading the research upon which the fundamentals are based. Then introspection (often from outside the discipline) can begin

Mark Eggerman says: December 1, 2014 at 10:22 pm
The reported results bundle together all disciplines. No base figures for respondents are reported, and no information is given on how participants were recruited, refusal rates, etc. Basics matter. I would be interested in seeing data specifically for Elsevier journals in the social sciences, particularly submissions in these disciplines from "China."

Adrian Mulligan says: December 2, 2014 at 11:36 am
Details of the nature of the study are on slide 9 of the presentation deck in the webinar. The deck only shows the summary findings. Obviously time constraints meant we were unable to show all the results. However, that said, where we found significant differences by subject or country, we highlighted noteworthy differences in the deck that was presented. We do NOT have survey data that is specific to Elsevier journals as the survey was about perceptions of researchers generally.

The background details for the study are (I've added in one or two more details here, that were not in the webinar):
*3,008 researchers responded to a survey of individuals randomly selected from a database of 1.4 million published researchers (6% response rate). The respondents were representative of the broader academic community by country and discipline. We had 319 refusals to the study (0.6% of base).
*The study builds on earlier studies (PRC 2007, Sense About Science 2009) – see other remarks to access these studies
*Fieldwork took place August 2013. Individuals were approached by e-mail.
*Survey tool: Online survey (14-18 minutes)
*Statistical testing: Maximum error margin is ± 1.5% at 90% confidence levels. When comparing main group and sub-group we have used a Z-test of proportion to identify differences between the overall average and the sub-group (90% confidence levels).
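The quoted error margin and significance test can be reproduced with a short calculation. The sketch below assumes the standard two-sided 90% z-value (1.645), a worst-case proportion of 0.5 for the margin of error, and a pooled-standard-error form of the two-proportion z-test; the sub-group figures in the usage example are hypothetical, not taken from the study:

```python
from math import sqrt

# Worst-case margin of error for a proportion at 90% confidence.
Z_90 = 1.645   # two-sided z-value for 90% confidence
n = 3008       # number of survey respondents
margin = Z_90 * sqrt(0.5 * 0.5 / n)  # p = 0.5 maximises the margin
print(f"max margin of error: ±{margin * 100:.1f}%")  # ≈ ±1.5%

def z_test_proportion(p_sub, n_sub, p_all, n_all):
    """Z-test comparing a sub-group proportion against an overall
    proportion, using a pooled standard error (one common formulation;
    the exact variant used in the study is not specified)."""
    pooled = (p_sub * n_sub + p_all * n_all) / (n_sub + n_all)
    se = sqrt(pooled * (1 - pooled) * (1 / n_sub + 1 / n_all))
    return (p_sub - p_all) / se

# Hypothetical example: a sub-group of 300 respondents with 75%
# satisfaction, compared against the overall 70% figure.
z = z_test_proportion(0.75, 300, 0.70, n)
print(f"z = {z:.2f} (significant at 90% if |z| > 1.645)")
```

With these inputs the worst-case margin comes out at roughly ±1.5%, matching the figure quoted above.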

If you want to know more about submissions to Elsevier journals in your area, specifically by geography, including China, please approach your publisher for the latest details.
