
A helping hand with finding reviewers: introducing the Elsevier reviewer recommender

June 15, 2018

By Navid Bazari


Our experience developing decision support tools for editors


At Elsevier, we want to support our editors, particularly with the tasks we know often prove difficult. One of these is the ongoing challenge of identifying suitable reviewers for submissions. In January 2017, we began developing tools to support these editorial needs. It is now a little over six months since the beta release of the first tool to come out of this work - the EVISE reviewer recommender - so we would like to explain our approach, discuss the lessons learnt and share some of the feedback from your peers who are already using the new technology.

Understanding your needs

As with any potential new tool, a sensible approach is to start with a keen understanding of the problem before looking at possible solutions. In this case, feedback shared with publishers, together with comments and ratings from the editor satisfaction surveys, allowed us to home in on the key problems you face as editors.

Having constructed a hypothesis of the most important problems we hoped to address, we then got out of the office to test our assumptions by talking to editors about:

  • How you go about using data to manage your journal’s performance

  • How you find peer-reviewers for submissions

  • How satisfied you are with the tools available

The result from this feedback was clear. Despite many tools being available, finding reviewers remains one of the most challenging tasks you face.

Identifying the problem

Making use of the feedback we had collected, we then visualised the standard approach to finding reviewers using a technique called “user story mapping”. The resulting “map” showed the standard goals are to:

  1. Match a manuscript to candidate reviewers whose qualifications and interests are suitable

  2. Remove candidates who have a potential conflict of interest

  3. Look for signals which might indicate the candidate reviewer's willingness to accept a request to review

We continued to develop the map, breaking each goal into sub-goals. This in turn led us to create a new system that responded powerfully to your needs.

Putting our system to the test

When creating systems like this, we are firm believers in iterative, evidence-based development. To ensure we were on the right track, we identified the biggest risk/assumption and designed a test to explore this. The test was in the form of a clickable “wireframe” of a new approach to reviewer identification, known as a “recommender”. After a few internal iterations of the design we went back out of the office to gather your feedback.

Our fieldwork showed us that it was time to validate whether we could actually build a system that identified good reviewers. Our hypothesis was that our software could, over time, overcome the "cold start" editors usually face when finding reviewers.

Looking at the technology

The tool we have developed has two parts. First, it identifies suitable reviewer candidates. We match the submission's meta-data against all research articles published in the past five years, then extract the authors' details for the top-matched articles, including their publishing and reviewing history. Finally, the tool filters out candidates based on our conflict of interest guidelines, for example removing known co-authors from the last three years.
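The candidate-identification stage described above can be sketched in a few lines of Python. This is only an illustration of the general shape of the pipeline, not Elsevier's actual implementation: the data structures, the keyword-overlap matching, and the simplified conflict-of-interest check are all assumptions made for the example (the real system matches richer meta-data and applies fuller conflict-of-interest rules).

```python
from dataclasses import dataclass

@dataclass
class Article:
    title_keywords: set   # keywords from the article's meta-data
    authors: list         # author names
    year: int

@dataclass
class Submission:
    title_keywords: set
    authors: list
    year: int

def find_candidates(submission, corpus, current_year, top_n=5):
    """Sketch of stage one: match submission meta-data against recent
    articles, extract their authors, then filter out conflicts."""
    # 1. Keep only articles published in the past five years.
    recent = [a for a in corpus if current_year - a.year <= 5]
    # 2. Rank articles by keyword overlap with the submission
    #    (a stand-in for the real content-similarity matching).
    scored = sorted(
        recent,
        key=lambda a: len(a.title_keywords & submission.title_keywords),
        reverse=True,
    )
    # 3. Extract the authors of the top-matched articles, preserving order.
    candidates = []
    for article in scored[:top_n]:
        for author in article.authors:
            if author not in candidates:
                candidates.append(author)
    # 4. Simplified conflict-of-interest filter: drop the submission's own
    #    authors (a fuller version would also drop recent co-authors).
    return [c for c in candidates if c not in submission.authors]
```

A call such as `find_candidates(submission, corpus, 2018)` then returns an ordered shortlist of conflict-free candidate names ready for the second stage.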

Second, the tool generates a meaningful recommendation. We re-rank the candidates using a machine learning model that combines content similarity with around ten other feature signals. For instance, the number of publications a reviewer has in the journal in question can be a strong indication of potential motivation.
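To make the re-ranking step concrete, here is a minimal sketch of scoring candidates by a weighted combination of feature signals. The feature names and weights are purely illustrative assumptions; the actual system learns how to combine its signals with a machine learning model rather than using hand-set weights.

```python
def rerank(candidates, weights=None):
    """Sketch of stage two: re-rank candidates by combining content
    similarity with other feature signals via a linear score.
    Features and weights here are illustrative, not the real model."""
    if weights is None:
        weights = {
            "content_similarity": 0.5,    # match between candidate's work and the submission
            "journal_publications": 0.3,  # publications in this journal, a proxy for motivation
            "review_history": 0.2,        # past reviewing activity
        }

    def score(features):
        # Weighted sum over whichever signals are present for the candidate.
        return sum(weights[name] * value for name, value in features.items())

    # Each candidate is a (name, feature-dict) pair; highest score first.
    return sorted(candidates, key=lambda c: score(c[1]), reverse=True)
```

In the sketch, a candidate with moderate content similarity but a strong publishing and reviewing record in the journal can outrank one whose only strength is topical match, which mirrors the motivation signal described above.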

The new tool is launched

In November 2017, we signed up 10 journals to take part in a "closed beta" of the tool. One user was immediately impressed, tweeting this short reaction:

“Beta testing Elsevier’s new tool to identify potential reviewers for manuscripts based on ML algorithms. Wow. Scary good.”

We found editors adopted the tool, using it to find one in every three of the reviewers they needed. Every two to four weeks we released enhancements based on what we learned.

With these successes we decided to move to an "open beta" release in April 2018. The tool is now available - on request - to all editors and journals on EVISE. As a result, over 500 more journals have started to use the recommender.

Where to next?

One of our goals is, of course, to make the tool available for all our journals; however, our vision does not end there. Ultimately, we want to build a platform that helps better allocate reviewers and saves editors time. We have already started work on the next steps, for example enabling reviewer "signals" such as days since last review or periods of leave. Do stay tuned for further updates. If you would like a demo of what's available now, please visit our YouTube channel, or if you would like to enable the recommender for your journal, please contact your publisher.


Navid Bazari