The Effect of Open Access upon Citation Impact

Does open access publishing increase citation rates?

Studies conducted in this area have not yet adequately controlled for various kinds of sampling bias. Read on...

Henk F. Moed

The debate about the effect of open access upon the visibility or impact of scientific publications started with the paper by Steve Lawrence (2001) in the journal Nature, entitled ‘Free online availability substantially increases a paper's impact’, which analyzed conference proceedings in the field of computer science. Open access is used here not to indicate the publisher business model based on the ‘authors pay’ principle, but, more generally, in the sense of being freely available via the Web. From a methodological point of view, the debate focuses on biases, control groups, sampling, and the degree to which conclusions from case studies can be generalized. This note does not give a complete overview of the studies published during the past decade but highlights key events.

In 2004, Stevan Harnad and Tim Brody (2004) claimed that physics articles submitted as preprints to ArXiv (a preprint server covering mainly physics, hosted by Cornell University), and later published in peer-reviewed journals, generated a citation impact up to 400% higher than that of papers in the same journals that had not been posted in ArXiv. Michael Kurtz and his colleagues (Kurtz et al., 2005) found, in a study on astronomy, evidence of a selection bias – authors post their best articles freely on the Web – and an early-view effect – articles deposited as preprints are published earlier and are therefore cited more often. Henk Moed (2007) found for articles in solid state physics that these two effects may explain a large part, if not all, of the difference in citation impact between journal articles posted as preprints in ArXiv and papers that were not.
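The selection-bias argument can be made concrete with a toy simulation (purely illustrative; all parameters are invented and not drawn from any of the studies above). Even when depositing a preprint has no causal effect at all on citations, a raw comparison of deposited versus non-deposited papers shows a sizeable apparent "OA advantage" whenever authors preferentially deposit their stronger work:

```python
import math
import random
import statistics

random.seed(42)

# Toy model: each paper has a latent "quality" that alone drives citations.
# Depositing a preprint has NO causal effect on citations in this model.
papers = []
for _ in range(10_000):
    quality = random.gauss(0, 1)
    citations = max(0.0, 10 + 5 * quality + random.gauss(0, 3))
    # Selection bias: the better the paper, the likelier the author deposits it.
    deposited = random.random() < 1 / (1 + math.exp(-2 * quality))
    papers.append((deposited, citations))

oa = [c for d, c in papers if d]
non_oa = [c for d, c in papers if not d]

# Apparent citation advantage of deposited papers, despite zero causal effect.
advantage = statistics.mean(oa) / statistics.mean(non_oa) - 1
print(f"Apparent OA citation advantage: {advantage:.0%}")
```

The point of the sketch is only that a naive group comparison conflates the deposit decision with paper quality; it says nothing about the size of any real-world effect.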

In a randomized controlled trial of open versus subscription-based access to articles in psychological journals from a single publisher, Phil Davis and his colleagues (Davis et al., 2008) did not find a significant effect of open access on citations. In order to correct for selection bias, a new study by Harnad and his team (Gargouri et al., 2010) compared self-selective self-archiving with mandatory self-archiving at four research institutions. They argued that, although the first type may be subject to a quality bias, the second can be assumed to occur regardless of the quality of the papers. They found that the OA advantage was just as high for both, and concluded that it is real, independent and causal: it is greater for more citable articles than for less significant ones, resulting from users self-selecting what to use and cite. But they also found that the percentage of the four institutions' publication output actually self-archived was, at most, 60%, and that for some it did not increase when their OA regime was transformed from non-mandatory into mandatory. Therefore, what the authors labeled ‘mandated OA’ is in reality, to a large extent, subject to the same type of self-selection bias as non-mandated OA.
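What randomization buys can be sketched in the same toy style (again purely illustrative, not Davis et al.'s actual data or analysis): when the access condition is assigned at random, it is independent of paper quality, so any difference in mean citations between the two groups can be tested against chance, here with a simple permutation test:

```python
import random
import statistics

random.seed(7)

# Toy randomized design: both groups are drawn from the same citation
# distribution, i.e. access condition has no true effect in this model.
citations_oa = [max(0, round(random.gauss(10, 4))) for _ in range(700)]
citations_sub = [max(0, round(random.gauss(10, 4))) for _ in range(2500)]

observed = statistics.mean(citations_oa) - statistics.mean(citations_sub)

# Permutation test: reshuffle group labels and see how often a difference
# at least as large as the observed one arises by chance alone.
pooled = citations_oa + citations_sub
n_oa = len(citations_oa)
trials = 2000
count = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:n_oa]) - statistics.mean(pooled[n_oa:])
    if abs(diff) >= abs(observed):
        count += 1

p_value = count / trials
print(f"observed difference: {observed:.2f}, p ≈ {p_value:.2f}")
```

Because assignment is random, a non-significant result here is informative in a way that an observational comparison is not: there is no deposit decision left for quality to hide behind.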

On the other hand, it should be noted that all the citation-based studies mentioned above appear to share the following bias: they relied on citation analysis carried out in a citation index with selective coverage of the good, international journals in their fields. Analyzing citation impact in such a database is a bit like measuring people's willingness to leave their car unused during the weekend by interviewing mainly people on a Saturday in the parking lot of a large department store outside town. Those who publish in the selected set of good, international journals – a necessary condition for citations to be recorded in the OA advantage studies mentioned above – will tend to have access to these journals anyway. In other words: there may be a positive effect of OA upon citation impact, but it is not visible in the database used. A citation index with more comprehensive coverage would make it possible to examine how the citation impact of the covered journals affects the OA citation advantage; for instance: is such an advantage more visible in lower-impact or more nationally oriented journals than in international top journals?

Analyzing article downloads (usage) is a complementary and, in principle, valuable method for studying the effects of OA. In fact, the study by Phil Davis and colleagues mentioned above did apply this method and reported that OA articles were downloaded more often than papers with subscription-based access. However, significant limitations of this method are that not all publication archives provide reliable download statistics, and that archives that do generate such statistics may record and/or count downloads in different ways, so that results are not directly comparable across archives. The implication seems to be that usage studies comparing OA with non-OA articles can be applied only in ‘hybrid’ environments, in which publishers offer authors who submit a manuscript both an ‘authors pay’ and a ‘readers pay’ option. But this type of OA may not be representative of OA in general, as it disregards self-archiving in the OA repositories being created at research institutions all over the world.

An extended version of this paper will be published soon in the Elsevier publication Research Trends.


Davis, P.M., Lewenstein, B.V., Simon, D.H., Booth, J.G., Connolly, M.J.L. (2008). Open access publishing, article downloads, and citations: Randomised controlled trial. BMJ, 337 (7665), 343-345.

Gargouri, Y., Hajjem, C., Larivière, V., Gingras, Y., Carr, L., Brody, T., Harnad, S. (2010). Self-selected or mandated, open access increases citation impact for higher quality research. PLoS ONE, 5 (10), art. no. e13636.

Harnad, S., Brody, T. (2004). Comparing the impact of open access (OA) vs. non-OA articles in the same journals. D-Lib Magazine, 10(6).

Kurtz, M.J., Eichhorn, G., Accomazzi, A., Grant, C., Demleitner, M., Henneken, E., Murray, S.S. (2005). The effect of use and access on citations. Information Processing & Management, 41, 1395–1402.

Lawrence, S. (2001). Free online availability substantially increases a paper's impact. Nature, 411 (6837), p. 521.

Moed, H.F. (2007). The effect of “Open Access” upon citation impact: An analysis of ArXiv’s Condensed Matter Section. Journal of the American Society for Information Science and Technology, 58, 2047-2054.

Archived comments


Stevan Harnad says: March 22, 2012 at 7:59 pm

No study based on samples and statistical significance-testing has the force of an unassailable mathematical proof.

But how many studies showing that OA articles are downloaded and cited more have to be published before the ad hoc critiques (many funded and promoted by an industry that has something of an interest in the outcome!) and the hopeful special pleading tire of the chase?

There are a lot more studies to try to explain away here:

Most of them just keep finding the same thing…

(By the way, on another stubborn truth that keeps coming back despite untiring efforts to say it isn’t so: Not only is OA research downloaded and cited more — as common sense would expect, as a result of making it accessible free for all, rather than just for those whose institutions can afford a subscription — but requiring (mandating) OA self-archiving increases OA self-archiving. Where on earth did Henk get the idea that some institutions’ self-archiving “did not increase when their OA regime was transformed from non-mandatory into mandatory”? Or is he just referring to the “mandates” that state that “you are required to self-archive only if and when your publisher says you may self-archive, and not if they say you may only self-archive if you are not required to”? See: )

Linda Willems says: March 23, 2012 at 4:06 pm

Thanks very much for taking the time to comment. Henk is travelling for work at the moment so please bear with us as there will be a slight delay before he will be able to respond.

Phil Davis says: March 24, 2012 at 5:27 pm

Thank you for highlighting our randomized controlled trial of open access publishing. Our study was expanded to 36 participating journals in the sciences, social sciences and humanities and citation performance was reported after 3 years.

Articles placed in the open access condition (n=712) received significantly more downloads and reached a broader audience within the first year, yet were cited no more frequently, nor earlier, than subscription-access control articles (n=2533) within 3 years. These results were consistent across all 36 journals. Please see:

Davis PM. 2011. Open access, readership, citations: a randomized controlled trial of scientific journal publishing. The FASEB Journal 25: 2129-34.
