Citation metrics and open access: what do we know?

A look at the literature reveals shortcomings in the way OA and subscription models are being compared and suggests how future studies could build on existing research to provide a more accurate picture


Knowing the impact of your research is important, particularly for career advancement, funding applications, and demonstrating the reach and significance of your work. Because impact can be measured in many ways, we take a holistic approach at Elsevier, analyzing metrics at the journal, researcher and article levels so researchers can be recognized for multiple forms of contribution and impact. Mendeley Stats is one example of how Elsevier is helping researchers understand the impact of articles they publish, using data from Scopus, ScienceDirect, Mendeley and NewsFlo.

Citations are a well-established measure of research impact; a citation can signal recognition or validation of your research by others. But does how you publish give you a citation advantage? Is your article more likely to be cited – and cited more often – if it is published or made available as open access?

Open access has many advantages, not least the potential to increase the reach and visibility of your research. We know open access is important to our authors, which is why we offer a range of publication options that reflect our support for both gold and green open access. But what else can open access do for your research, and indeed for your career? When it comes to citations at least, we do not yet know.

Based on our close review of the available evidence, there is no clear citation advantage for open access articles, or at least not one that can be attributed solely to an article’s access status. Whilst early studies suggested a causal relationship between an article’s OA status and higher citation counts, subsequent studies have identified weaknesses in the methodologies of those earlier studies. For example, McCabe and Snyder (2013) show that claims of an OA citation advantage are a byproduct of a failure to control for other factors, specifically article quality.

Another recent study, by Hua et al (2016), looking at citations of open access articles in dentistry, found no evidence that open access articles receive significantly more citations than non-open access articles. Instead, articles describing basic science research, case-control studies, randomized controlled trials and systematic reviews were significantly more likely to be cited three times or more, whilst articles describing case reports/series were significantly less likely to be cited three times or more. In dentistry at least, the type of article you publish seems to make a difference, but not its OA status.

Other studies have shown similar results. For example, Tahamtan et al (2016) found article type (review, letter to the editor, short communication, etc.), study topic and team size to be among the variables most strongly correlated with higher citation impact, and Davis (2011) found no overall citation advantage for open access articles. The Davis study is the only one to date that addresses this topic using the most robust study design available – a randomized controlled trial – which is considered the gold standard methodology in medical research for reducing selection bias.

What do they mean by “open access”?

Defining “open access” is crucial to any study of this nature. Some studies, including some of those mentioned above, focus on papers published in fully gold open access journals, or on papers that are indexed and later found to have accessible full text. But in the latter case, it isn’t always clear whether the full text was published via the gold route, has been made available via the green route, or is actually an illicit posting, i.e. posted at a time, in a location or in a version contrary to the journal’s policy. Without clear definitions of open access, it is difficult to know what “OA status” tells us in and of itself, let alone in relation to citations. The timing of that availability in relation to the citations is also often unclear. For example, if an author were to post the accepted manuscript of a paper published in 2011 yesterday and woke up today to see that it had accumulated 30 citations since publication, the author could not claim that OA caused those citations – but a study conducted according to the above methodology today might imply so.

Selection bias and other methodology pitfalls

A recent study by Science-Metrix and 1science suggests that open access papers do have a citation advantage: on average, open access papers are associated with a 50 percent higher research impact (presumably measured by citations, although this is not made explicit) than papers published under the subscription model that are not subsequently self-archived and made available via green open access.

What is selection bias?

Selection bias is the selection of units of analysis (such as participants in a clinical drug trial, or papers in the present case) without randomization. Without proper randomization, studies run the risk of analyzing unrepresentative samples, which may in turn produce different results from those that would have been found had a random – and therefore more representative – sample been used.
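As an illustration of how selection bias can manufacture an apparent citation advantage, here is a minimal Python simulation. All numbers are invented for illustration and are not drawn from any of the studies discussed: citations depend only on an unobserved quality score, yet because authors in this toy model preferentially make their stronger papers OA, a naive comparison of group means still reports an OA “advantage”.

```python
import random

random.seed(0)

# Simulate 10,000 papers whose citation counts depend only on an
# unobserved "quality" score -- OA status has zero causal effect here.
papers = []
for _ in range(10_000):
    quality = random.gauss(0, 1)
    citations = max(0, round(10 + 5 * quality + random.gauss(0, 2)))
    # Selection bias (assumed for the sketch): authors are more likely
    # to make their stronger papers OA, so OA status correlates with quality.
    p_oa = 0.2 + 0.15 * max(0, quality)
    is_oa = random.random() < p_oa
    papers.append((is_oa, citations))

oa = [c for is_oa, c in papers if is_oa]
non_oa = [c for is_oa, c in papers if not is_oa]

mean_oa = sum(oa) / len(oa)
mean_non_oa = sum(non_oa) / len(non_oa)
print(f"Mean citations, OA:     {mean_oa:.1f}")
print(f"Mean citations, non-OA: {mean_non_oa:.1f}")
# The naive comparison shows OA papers cited more on average, even
# though, by construction, access status never influenced citations.
```

A randomized controlled design such as Davis (2011) avoids this trap by assigning OA status at random, breaking the link between access status and quality.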

The same challenges with defining open access are present in the Science-Metrix paper, alongside a well-documented problem for studies that claim a citation advantage for open access articles: selection bias. This study, like many others on the topic, does not appear to be randomized and controlled, which is important for ensuring that its conclusions really tell us what we think they do.

In this study, everything made available on publishers’ platforms is assigned to the gold category, which potentially captures open archive content published under the subscription model but later made available by publishers – another form of green open access. It may also capture temporarily available articles in promotional issues and/or selected articles under schemes such as “editor’s choice.” Green open access is defined just as loosely: as papers that are free elsewhere, which may include illicit postings. It is unclear what impact this would have on the results of the study, but it is worth considering – and also asking this question for future studies: is there a stronger citation advantage for green open access papers on publishers’ platforms relative to green OA papers fragmented across multiple repositories?

A further methodological weakness is that it is not clear whether, or how, the calculation of citation impact for green open access papers distinguishes between citations received before the embargo period ended and citations received after the article was self-archived and made publicly accessible. This ignores the possibility that such papers may accumulate citations at a high rate soon after publication, before they are self-archived and made available via green open access.
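To make the distinction concrete, here is a sketch of the kind of bookkeeping such a study would need. The function and dates are hypothetical and are not part of the Science-Metrix methodology:

```python
from datetime import date

def split_citations(citation_dates, oa_date):
    """Partition citation events by whether they occurred before or
    after the paper became openly accessible (hypothetical helper)."""
    before = [d for d in citation_dates if d < oa_date]
    after = [d for d in citation_dates if d >= oa_date]
    return before, after

# Example: a paper published in 2011 and self-archived in May 2013
# after a 24-month embargo (invented dates for illustration).
cites = [date(2011, 6, 1), date(2012, 3, 15), date(2014, 1, 10)]
pre, post = split_citations(cites, oa_date=date(2013, 5, 1))
# Only the citations in `post` could plausibly be attributed to
# green OA availability; those in `pre` predate it entirely.
```

Without this split, every citation a paper ever received is credited to its eventual OA status, inflating any apparent green OA advantage.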

There is also no information on timing, i.e. when articles were posted, which again has a bearing on how we might interpret the results. These methodological flaws may help account for findings that are not in line with previous reports.


Given our commitment to open access publishing, and to ensure we are offering the best support and guidance to our authors, this is an area of research that is of great interest to us. What we have learned from our review to date is that studies in this area should guard against selection bias and control for factors such as research quality, article type and team size; these controls are essential to a clear and accurate understanding of citation advantage claims. It is also essential that open access is defined clearly and accurately, and that other controls are included, such as controlling for citation advantages that may be associated with particular funders or disciplines. Evidence shows that NIH-funded papers, for example, appear far more often than expected in top-cited article populations. The same may be true for other funders and may skew results by showing an OA advantage that is really a proxy for a particular funder or the quality of the research it typically funds.

We look forward to reading further research on this important topic.

Mendeley Group: Open access compared with subscription articles

This public Mendeley Group lists articles that examine citation, download and other bibliometric/altmetric indicators for open access articles compared with subscription articles. We encourage you to investigate the articles therein, and to add relevant articles so others can explore the evidence and make up their own minds. The Scholarly Kitchen has also posted a critical overview of citation advantage studies, which is worth considering as you read through the evidence.

Mendeley Stats

Mendeley Stats is a free service that enables you to:

  • Get detailed insight about how your articles are being read, shared and cited.
  • View contextual mentions of your articles in the media.
  • See the search terms used to find your article and what readers of your article are also reading.


Written by

Gemma Hersh

As Elsevier’s VP of Open Science, Gemma Hersh is responsible for developing and refreshing policies in areas related to open access, open data, text mining and others. Gemma also travels around the world to meet with government officials, institutions, funders and others to build, strengthen and maintain relationships and discuss areas of mutual interest. In the UK, Gemma serves as publisher representative on the Universities UK (UUK) Open Access Monitoring Group and is a member of the International STM Public Affairs Committee.

Before joining Elsevier, Gemma was Head of Public Affairs for the UK Publishers Association and has worked in the creative industries, both in government and in industry, for the past seven years. She holds an MPhil in Politics and Comparative Government from Oxford University, but her real love is history, in which she holds a First Class degree from King’s College London.

Written by

Andrew Plume, PhD

Dr. Andrew Plume specializes in applying scientometric techniques (the quantitative analysis of science) to market and competitive intelligence in scholarly publishing. From the lowest levels of aggregation, such as individual authors and articles, through to entire countries and subject domains, Andrew studies the growth and development of the scholarly literature by analyzing patterns of publications, citations and related indicators. Andrew frequently presents on these topics to journal editors, learned and scholarly societies, and the publishing community.

After receiving his PhD in plant molecular biology from the University of Queensland, Australia, and conducting post-doctoral research at Imperial College London, Andrew joined Elsevier in 2004. He has co-authored research and review articles in the peer-reviewed literature and was a member of the editorial board of Research Trends (published 2007-14).
