Lifting the lid on publishing peer review reports: an interview with Bahar Mehmani and Flaminio Squazzoni

January 28, 2019

By Christopher Tancock

Two of the authors of a recent Nature Communications study on open peer review explain their findings and what they mean for reviewer behaviour

Bahar Mehmani (BM) is Reviewer Experience Lead at Elsevier; Flaminio Squazzoni (FS) is a full professor of sociology in the Department of Social and Political Sciences at the University of Milan. In this article, Christopher Tancock interviews the pair about their recent article in Nature Communications on the topic of open peer review reports.

  1. So you’ve published a recent paper in Nature Communications that provides some interesting insights into open peer review… What made you look at this area?

    FS: This is a paper which took years to get off the ground. It stems from work with PEERE and BM. At its core was the issue of sharing journal data in a safe, responsible manner. This was one of the most significant pieces of work from PEERE; others are in store and will hopefully follow soon. There had been a lot of discussion in the academic press about making peer review (PR) more transparent and accountable, and BM had suggested we needed to look at data. There was (and is) a great deal of interest in the potential of open PR under the umbrella of open science/open access, along with lots of debate about its potential benefits. What this project does is test these assumptions with statistical, evidence-based analysis. There has been other analysis of what works and doesn’t work with open PR, but analysis is more robust, responsible and reliable when conducted using cross-journal data obtained directly from publishers, as happened here. The procedures and collaboration were really good and should serve as a standard for future research on PR.

  2. How long was the study?

    BM: I started the pilot with five journals in November 2014, based on a small experiment with one of them in 2012. The pilot ended in December 2017. What I shared was the data from the five journals’ peer review databases. This was part of a wider data sharing process conducted under a PEERE data sharing protocol, whereby 70 Elsevier journals, in agreement with their editors, contributed data.

    The idea was to run the pilot and use the data to see what impact (if any) open PR had on submission rates and on the behaviour of reviewers. This had always been very difficult to do because there are so many variables. We had to consider how to combine data to accurately model the impact of this form of peer review on referees’ behaviour. Before any of this, however, we first had to develop a proper data sharing protocol. That took an additional two years! There was lots of back-and-forth between Elsevier, Springer Nature, Wiley, the Royal Society and PEERE. Happily, we finally celebrated the signing of a comprehensive protocol between all parties in 2017.

  3. What were you looking for?

    BM: We were basically interested in testing the anecdotal evidence about open PR. One hears a lot about open PR, but there’s not much actual evidence! There was no large-scale study on the topic. With our study, we examined how open PR affects reviewers’ willingness to accept invitations, their completion rates, how they write their reports, what types of recommendation they make, and what effect the decision to reveal (or not reveal) one’s identity has…

    FS: If I may add, I’d agree with the above. There was also another important aspect: looking at all the factors systematically. There have been other articles on the impact of individual aspects of open PR, but here we could look at them all together, systematically and coherently. We were also able to compare what happened with these five journals against external data on similar journals that did not switch to OPR. We used this as a “control” to test and validate our findings, which was very important. One must also be mindful of the need to consider the context, which allows you to interpret results more accurately… For example, declining reviewer acceptance is a general trend, so it is not necessarily specific to these five journals and this test. Existing studies often make claims based on limited data and suggest generalizations. Being robust in this way allowed us to be confident in the interpretation of our results.

    One other thing to note… The academic landscape works against this sort of work, which takes years to complete: nowadays the trend is towards fast work, fast publications, more citations, etc. Our type of study required slow, careful work by everyone involved, especially Francisco Grimaldo (our data master from the University of Valencia and a leading figure in PEERE) and Giangiacomo Bravo (our great data scientist from Linnaeus University in Sweden)!

  4. What did you expect to find?

    FS: We were expecting more negative results, to be frank. We anticipated a more pronounced effect from the trial, but we didn’t get one! We didn’t note a significant effect from open PR on referees’ willingness to review or on turnaround time. About the only negative issue was that only around eight per cent of reviewers agreed to reveal their names. Interestingly, these [resulting reports] were often positive… Received wisdom told us that switching to OPR would lead to reviewers being sceptical and hesitant, and taking longer to perform reviews, but this is not true, as our study shows! The reluctance of referees to reveal their identities is not really surprising, but it shows that open PR is possible and sustainable when you bear this in mind.

  5. What is the key take-home message from your research?

    FS: I’d say that open PR is a very important innovation, but it needs to be coherent with the incentives and rewards that underpin the world of academia! You can’t implement a PR system without considering the human factor. One needs to remember that scholars are embedded in a competitive environment with complex reward systems. Therefore, you need to establish a system that recognizes and is sensitive to those reward systems and that considers the human and social factors.

    BM: From my point of view, it’s the need for systematic analysis to test any assumptions, as well as the importance of working with independent researchers to do so.

  6. Should journals and publishers be more proactive and engage with open peer review faster, do you think?

    FS: Well, you know, here in Italy we like slow food, not fast food! Maybe it’s the same with this topic… I think it’s better to proceed slowly and carefully. Let’s take time to develop evidence-based, careful research. We can then analyse the effect of manipulating PR across publishers and across domains. For example, open PR is not generally accepted in the humanities and social sciences, where individual research is dominant and confidentiality is a big issue. As a result, it’s important not to generalize and impose standards that do not suit certain communities.

  7. Is there a plan for a follow up or “next step” after the article?

    BM: Well, we’re very interested in looking at behavioural aspects of those who did reveal their identity. We want to look at their motivations.

    FS: Indeed! We’re already looking more closely at that eight per cent; in fact, I just saw some preliminary statistics from Giangiacomo and the team about this yesterday. What we have already found is a correlation between the type of recommendation and the decision whether or not to reveal one’s identity. Those who revealed their identity mostly made positive recommendations, whereas in all cases where the recommendation was to reject, the reviewer stayed anonymous. We want to use this analysis to build a simulation model, so we can look at an even broader scale and play with the parameters to try and judge the impact on behaviour.

  8. Is there any advice you’d offer journal editors and reviewers as a result of your study?

    FS: Yes! Sharing internal data in a responsible, well-designed and well-managed way is to be encouraged!

    BM: Thinking about referees: I myself recently reviewed a paper and decided to reveal my identity. After all, I had dedicated a lot of time and effort to it, so I wanted to be open in order to promote it. I was probably one of the only people at that point who had carefully read the paper (twice!). Reviewing openly gives you the opportunity to promote the paper at conferences and on social media.

    FS: Ah, but what if you had recommended rejection?!

    BM: Ha ha! Well, I’d admit that I’m not in the normal academic context here, so I don’t have anything to lose, but I can empathise with the motivations of referees who feel they need to stay anonymous. I want to learn more about this behaviour!
