
5 steps to building a better recommendation engine

Two Elsevier software engineers reveal how they help you find the most relevant articles for your research

Anna Bladzich, Senior Data Engineer at Elsevier, presents at the Spark+AI Summit in London. (Photos by Annelies van Dam)

Editor’s Note: Anna Bladzich and Adam Davidson, senior data engineers behind ScienceDirect’s recommendation engine, presented at the Spark+AI Summit in London on Learning to Rank with Apache Spark: A Case Study on Production Machine Learning. Here, they take you behind the scenes with their work at Elsevier.


Anna Bladzich: As the amount of published research grows every year, researchers have more opportunities to build on each other's work. However, that opportunity comes at a price – having to navigate a huge volume of content.

Adam Davidson: That's where recommendation engines come in. Just as Spotify and Amazon give you suggestions based on your previous choices, a platform like ScienceDirect can present relevant articles for your continued research.

Of course, everyone has examples of where Amazon or Spotify have made strange or contradictory recommendations, and that’s because building a useful and effective engine is not easy. We caught up with Anna and Adam to find out the key steps in creating a successful tool.


1. Know why you’re doing it.

Anna: The whole ethos behind it is that we want to improve the research experience. There's more and more research out there, more to pay attention to. ScienceDirect has millions of articles and 14 million users, so for everyone who logs on, there's a lot of relevant information. What we're trying to do is make the research experience more efficient and find the most relevant articles so researchers can make an informed choice about what to follow up on.

Adam: Yeah, it's about driving efficiency through those recommendations. When a researcher is in ScienceDirect, they'll see a pane on the right-hand side with recommended articles based on what they're currently reading. We track how often people click on those recommendations, and we're finding that our new system gives people what they need significantly more often.

Anna: I get a massive kick out of that! I get feedback from people saying, ‘I’ve been in this field for years, and you’ve just recommended a very significant article which I never would have found otherwise.’ That’s where the feel-good element comes in – when you get feedback like that, it’s fantastic.

2. Build your algorithm.

Anna: For the ScienceDirect article recommender, the first step is collaborative filtering. For one data set, you have tens of millions of articles in ScienceDirect, which we might recommend. For the other, you have 14 million users of ScienceDirect and the information on which articles they've viewed or downloaded.

If you’re looking at an article, we want to be able to recommend to you other articles which are somehow related. We do this by comparing our users’ browsing histories and building a co-occurrence matrix of all the articles people browsed together. The ones that have been viewed together most often are the ones we serve as recommendations on the page. It’s quite a tough act to process the data and choose only three out of tens of millions, but technologies like Apache Spark, which work on clusters of independent machines in a divide and conquer manner, make that possible.

Adam: Collaborative filtering is similar to Amazon’s ‘people who bought this also bought this’ feature. Those connections between users are the building blocks of what we do.
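To make the idea concrete, here is a minimal PySpark sketch of the kind of co-occurrence counting Anna describes: self-joining a table of user views to find article pairs browsed by the same people, counting the pairs, and keeping the most frequent co-views as recommendation candidates. The column names, storage paths and top-3 cut-off are illustrative assumptions, not the actual ScienceDirect pipeline.

```python
# Minimal PySpark sketch of article co-occurrence counting (illustrative only;
# column names, paths and the top-3 cut-off are assumptions).
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("article-cooccurrence").getOrCreate()

# views: one row per (user_id, article_id) interaction, e.g. a view or download
views = spark.read.parquet("s3://example-bucket/views/")  # hypothetical path

# Self-join on user to find pairs of articles browsed by the same person.
a = views.alias("a")
b = views.alias("b")
pairs = (
    a.join(b, on="user_id")
     .where(F.col("a.article_id") < F.col("b.article_id"))  # drop self/duplicate pairs
     .select(F.col("a.article_id").alias("article"),
             F.col("b.article_id").alias("co_article"))
)

# Count how often each pair was viewed together: the sparse co-occurrence matrix.
cooccurrence = pairs.groupBy("article", "co_article").count()

# Keep the top 3 co-viewed articles per article as recommendation candidates.
w = Window.partitionBy("article").orderBy(F.desc("count"))
top3 = (
    cooccurrence.withColumn("rank", F.row_number().over(w))
                .where(F.col("rank") <= 3)
)
top3.write.mode("overwrite").parquet("s3://example-bucket/recommendations/")
```

Because the join, group-by and window steps all shuffle data across a cluster, this is exactly the kind of divide-and-conquer workload Spark is built for.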

3. Spot the algorithm’s weaknesses and improve on them.

Adam Davidson presents on Apache Spark and production machine learning at the Spark+AI Summit in London.

Adam: Collaborative filtering is clever in some ways and dumb in others. So, for example, it doesn’t care about when something was published, it doesn’t care about what journal it was in, and it doesn’t care about what the citation network was like. That is all information we can draw on to improve recommendations.

Anna: Right – one of the weaknesses of collaborative filtering is that things that have been very popular in the past will be considered to be more significant. Articles that have been added recently will slip further down the list because not many users would have had the chance to view them. To mitigate those limitations, we apply some interesting machine learning techniques.

Adam: One of those is Learning to Rank. So, for example, users tell us they have a preference for articles that were published more recently. In the Learning to Rank model, we can look at the publication date of an article and give it a certain significance. We might also look at things like the content in the abstract and turn that into a numerical feature we can add to the model.
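As a rough illustration of the feature side of this: Spark's ML library does not ship a dedicated Learning to Rank estimator, so the sketch below stands in with a pointwise approximation, assembling hypothetical signals (co-occurrence strength, recency, abstract similarity) into a feature vector and training a gradient-boosted trees classifier on historical click labels. The schema and column names are assumptions for illustration only, not Elsevier's production model.

```python
# Illustrative pointwise learning-to-rank sketch in Spark ML (assumptions:
# a "candidates" DataFrame with per-article features and a historical
# "clicked" 0/1 label; a GBT classifier stands in for the ranking model).
from pyspark.sql import SparkSession, functions as F
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import GBTClassifier
from pyspark.ml import Pipeline

spark = SparkSession.builder.appName("ltr-sketch").getOrCreate()

candidates = spark.read.parquet("s3://example-bucket/candidates/")  # hypothetical

# Turn publication date into a recency feature (days since publication).
candidates = candidates.withColumn(
    "days_since_pub", F.datediff(F.current_date(), F.col("publication_date"))
)

# Combine the signals discussed above into one feature vector:
# co-occurrence strength, recency, and an abstract-similarity score.
assembler = VectorAssembler(
    inputCols=["cooccurrence_count", "days_since_pub", "abstract_similarity"],
    outputCol="features",
)

gbt = GBTClassifier(labelCol="clicked", featuresCol="features", maxIter=50)

model = Pipeline(stages=[assembler, gbt]).fit(candidates)

# Score new candidates; ordering by predicted click probability gives the ranking.
scored = model.transform(candidates)
```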

4. Listen to your users.

Adam: One of the things that has really helped us is A/B testing our approach all the way through. We can deploy something and test it with the community, and it’s their usage and feedback that helps shape the result. You can see in the numbers whether people are engaging with the new version of the recommendations, and the community essentially tells you whether you’re helping them in a better way than before.
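Here is a toy example of how such an A/B comparison might be checked for statistical significance, using a two-proportion z-test on click-through rates. The counts are invented and the choice of test is an assumption, not a description of Elsevier's actual experiment analysis.

```python
# Toy two-proportion z-test for an A/B test on recommendation click-through
# rate (counts are invented for illustration).
from statsmodels.stats.proportion import proportions_ztest

clicks = [5200, 4600]           # clicks in variant B (new model) and variant A (control)
impressions = [100000, 100000]  # recommendation impressions per variant

# One-sided test: is the new variant's CTR larger than the control's?
z_stat, p_value = proportions_ztest(clicks, impressions, alternative="larger")
print(f"CTR B: {clicks[0]/impressions[0]:.3%}, CTR A: {clicks[1]/impressions[1]:.3%}")
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
```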

Anna: Before running an A/B test, we also use offline evaluation: we make tweaks to the model, and then run that model on data we've set aside for testing. That helps us predict what the live results will be.
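A minimal sketch of that kind of offline evaluation, assuming we score each user's recommendations against a held-out set of articles they actually went on to view, using precision@k. The data structures and k = 3 are illustrative assumptions.

```python
# Minimal offline-evaluation sketch: precision@k of recommendations against a
# held-out set of articles each user actually viewed (structures are illustrative).
from typing import Dict, List, Set

def precision_at_k(recommended: List[str], relevant: Set[str], k: int = 3) -> float:
    """Fraction of the top-k recommendations that appear in the held-out views."""
    if not recommended:
        return 0.0
    top_k = recommended[:k]
    hits = sum(1 for article in top_k if article in relevant)
    return hits / len(top_k)

def mean_precision_at_k(recs: Dict[str, List[str]],
                        holdout: Dict[str, Set[str]],
                        k: int = 3) -> float:
    """Average precision@k over all users present in the hold-out set."""
    scores = [precision_at_k(recs.get(user, []), viewed, k)
              for user, viewed in holdout.items()]
    return sum(scores) / len(scores) if scores else 0.0

# Toy example: compare a tweaked model's recommendations to held-out views.
recs = {"u1": ["a1", "a7", "a3"], "u2": ["a2", "a9", "a4"]}
holdout = {"u1": {"a1", "a3"}, "u2": {"a5"}}
print(mean_precision_at_k(recs, holdout))  # ~0.33
```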

5. Love it.

Anna: The fun part of the job really comes with every new project – you get the project goal and you find yourself thinking, ‘What can I tease out of these datasets? What algorithms can I use that will help me get to the goal?’ It’s where we get to put our scientist hat on and run lots of experiments to see which combination of data and algorithms will give the best results for our users.

Adam: When you look at all the data sets we can combine across Elsevier, from platforms like Scopus, ScienceDirect, Mendeley and SciVal, you can connect the dots and create something that’s really personalized and really useful for people.

