
San Francisco Declaration on Research Assessment (DORA) – Elsevier’s view

Elsevier supports those elements of DORA that reflect long-known problems with Impact Factors, areas in which we have actively supported a range of alternatives and best practices. Elsevier is not signing DORA in its entirety, however, as it is not our place to advocate for positions that are primarily aimed at other partners in the research community. Mendeley is signing DORA in its own right.

The San Francisco Declaration on Research Assessment (DORA)

The Annual Meeting of the American Society for Cell Biology convened in San Francisco, California, in December 2012. Among the more than 100 scientific sessions and 3,000 poster presentations, a group of scholarly journal editors and publishers met to discuss an issue which has long pressed on many of the brightest minds in their field. This was not a purely scientific problem, but one which they nonetheless felt had the potential to affect the evaluation – and so the conduct – of research in cell biology and beyond.

What emerged from that meeting is the San Francisco Declaration on Research Assessment (DORA), the main thrust of which is best encapsulated in one of several editorials published to coincide with DORA's release last week:

The Impact Factor is the most popular numerical measure of a scientist's work. Despite many well-documented flaws, the Impact Factor is commonly used in recruitment, appointment, and funding decisions. A diverse group of stakeholders is now making a concerted effort to combat misuse of the Impact Factor and is calling for the development of more accurate measures to assess research.1

The general recommendation of DORA is that all stakeholders in the research community:

Do not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist's contributions, or in hiring, promotion, or funding decisions.

The Declaration goes on to list specific recommendations for key groups, including funding agencies, institutions, publishers, organizations supplying metrics, and researchers themselves. At the time of writing, DORA has more than 4,000 individual signatories, and over 160 organizations have also signaled their support.

Elsevier's view of the use of metrics in research assessment

Elsevier welcomes an informed and open debate on the role of journal-level metrics in research assessment, given the impact such exercises have on all stakeholders in scholarly communication. We also welcome the opportunity to partner with others to improve current systems or craft new and better ones, and to ensure they are used appropriately.

Before turning to the recommendations aimed specifically at journal publishers and providers of metrics relating to articles and journals, it is important to note more broadly that Elsevier has taken a clear position on metrics in research assessment in general, and the Impact Factor in particular, for many years. In 2000, Elsevier published a white paper discussing the 'use and abuse' of Impact Factors, at the same time offering a detailed exploration of the caveats around the calculation of the metric also listed in DORA:

The … impact factor has moved in recent years from an obscure bibliometric indicator to become the chief quantitative measure of the quality of a journal, its research papers, the researchers who wrote those papers, and even the institution they work in. This pamphlet looks at the limitations of the impact factor, how it can and how it should not be used.2
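To make those calculation caveats concrete: the two-year Impact Factor for a journal in year Y is the citations received in Y by items the journal published in Y-1 and Y-2, divided by the number of 'citable items' it published in those two years. The following is a minimal sketch with made-up numbers, assuming this standard definition; one well-documented caveat is noted in the comments, namely that the numerator counts citations to all items while the denominator counts only citable ones.

```python
def two_year_impact_factor(citations, citable_items):
    """Two-year Journal Impact Factor for year Y (illustrative sketch).

    citations: citations received in year Y to anything the journal
        published in Y-1 and Y-2 (the numerator counts citations to
        ALL items, including editorials and letters)
    citable_items: number of 'citable' items (articles and reviews
        only) published in Y-1 and Y-2 -- the well-known
        numerator/denominator asymmetry
    """
    return citations / citable_items

# Made-up numbers: 1,200 citations in 2012 to items from 2010-11,
# against 400 citable items published in those two years.
print(two_year_impact_factor(1200, 400))  # 3.0
```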

More recently, we (along with respected experts in the field of citation analysis and research assessment) responded to published criticisms3 as follows:

… we discuss the value of journal metrics for the assessment of scientific-scholarly journals from a general bibliometric perspective, and from the point of view of creators of new journal metrics, journal editors and publishers. We conclude that citation-based indicators of journal performance are appropriate tools in journal assessment provided that they are accurate, and used with care and competence.4 

Elsevier agrees unreservedly with the statement made by Thomson Reuters in 2008 that: "Perhaps the most prominent misuse of the Journal Impact Factor is its misapplication to draw conclusions about the performance of an individual researcher."5

We offer clear guidance for editors, authors and readers on the appropriate use and interpretation of the Impact Factor on our Journal Metrics website, alongside details of other journal- and author-level metrics which can provide a complementary perspective on journal impact and relevance. Last December, I posted a webcast titled "The Impact Factor and other Bibliometric Indicators" here on Elsevier Connect.

Finally, Elsevier has recently invested in two further advances in delivering a range of metrics to these groups, described below.


Journal Insights pods

The Journal Insights pod is available on many Elsevier journal homepages. Alongside the traditional Impact Factor and its 5-year variant, it presents Eigenfactor and Article Influence scores and the Scopus-derived SNIP and SJR. Alongside these journal-level citation metrics, the pod shows peer-review and publication speeds and an overview of the journal's international reach (as indicated by the geographic spread of recent authors).

Article-level metrics in ScienceDirect, Scopus and Mendeley

Article-level citation metrics are available in Elsevier's full-text platform ScienceDirect, sourced from our abstract and citation index, Scopus. Scopus itself also features the Altmetric app, which tracks mentions of each article in traditional and social media online. Finally, within the researcher productivity and collaboration platform Mendeley (acquired by Elsevier in April), the need to assess research regardless of the journal that published it is met by providing aggregated readership data of papers as a complement to both citations and downloads.

Elsevier's position on specific recommendations

Turning now to address the DORA recommendations aimed specifically at publishers such as Elsevier, the Declaration makes two recommendations with a direct bearing on research assessment:

For publishers

6. Greatly reduce emphasis on the journal impact factor as a promotional tool, ideally by ceasing to promote the impact factor or by presenting the metric in the context of a variety of journal-based metrics (e.g., 5-year impact factor, Eigenfactor, SCImago, h-index, editorial and publication times, etc.) that provide a richer view of journal performance.

7. Make available a range of article-level metrics to encourage a shift toward assessment based on the scientific content of an article rather than publication metrics of the journal in which it was published.

As noted in the previous section and the boxed text, Elsevier took proactive steps towards the first two of these recommendations long before DORA was created, and will continue to lead the way in the publishing industry in this regard. Several further recommendations are directed at suppliers of metrics (a group in which Elsevier qualifies with Scopus, the SciVal product suite and the newly acquired Mendeley platform):

For organizations that supply metrics

11. Be open and transparent by providing data and methods used to calculate all metrics.

12. Provide the data under a licence that allows unrestricted reuse, and provide computational access to data, where possible.

13. Be clear that inappropriate manipulation of metrics will not be tolerated; be explicit about what constitutes inappropriate manipulation and what measures will be taken to combat this.

14. Account for the variation in article types (e.g., reviews versus research articles), and in different subject areas when metrics are used, aggregated, or compared.

On the first two of these points, the methodology underpinning the two journal-level metrics that are computed using Scopus data (SNIP and SJR) has been published in the peer-reviewed literature, and the metrics themselves are freely available on our Journal Metrics website for downloading and re-use.

On the last point, SNIP (and especially the newly-revised method now available) intrinsically accounts for differential citation practices in different fields of research, allowing the direct comparison of SNIP values across all journals.
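To illustrate the normalization, SNIP can be thought of as a journal's raw citations per paper divided by the citation potential of its field, that is, how densely papers in that field cite the recent, database-covered literature. The sketch below is a deliberate simplification, not the published algorithm (which defines citation potential more carefully and, in the revised version, uses relative values and specific citation windows).

```python
def snip_like(raw_impact_per_paper, field_citation_potential):
    """Simplified SNIP-style field normalization (illustrative only).

    raw_impact_per_paper: the journal's average citations per paper
    field_citation_potential: proxy for how densely the journal's
        field cites recent, database-covered literature (e.g. the
        mean number of such references in papers citing the journal)
    """
    return raw_impact_per_paper / field_citation_potential

# Made-up numbers: a mathematics journal and a cell-biology journal
# with very different raw citation rates land on comparable values
# once field citation density is divided out.
print(snip_like(2.0, 5.0))    # low-citation-density field: 0.4
print(snip_like(8.0, 20.0))   # high-citation-density field: 0.4
```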

Regarding the manipulation of journal metrics, in 2012 we published an article in our Editor's Update newsletter on Impact Factor ethics, particularly on the issue of journal self-citation and the methods by which inappropriate levels of it are achieved. Elsevier's position on Impact Factor 'gaming' was made clear:

Elsevier uses the Impact Factor (IF) as one of a number of performance indicators for journals. It acknowledges the many caveats associated with its use and strives to share best practice with its authors, editors, readers and other stakeholders in scholarly communication. Elsevier seeks clarity and openness in all communications relating to the IF and does not condone the practice of manipulation of the IF for its own sake.

Mendeley on DORA

For us, the important part of DORA is addressing the misuse of the Impact Factor in researcher assessment. We hear from our researchers that they want a better way to assess research, wherever it may have been published.

Mendeley meets this need by sharing aggregated data on the readership of papers on Mendeley as an open and more rapid complement to citations or downloads. Any metric can be gamed, of course, but when you have a panel of correlated metrics, it gets exponentially harder to do so, and that's why we support the researcher assessment aims of DORA, even as we agree that the Impact Factor retains value as an indication of journal influence.
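As a toy illustration of that point (a hypothetical heuristic, not a Mendeley feature): when an article's standing under one metric diverges sharply from its standing under the others, the panel itself exposes the anomaly.

```python
def rank_spread(ranks):
    """Spread of an article's rank across a panel of metrics.

    ranks: the article's rank under each metric in the panel,
        e.g. [citation_rank, download_rank, readership_rank].
    Correlated metrics should give similar ranks; a large spread
    suggests one metric is out of line with the others, e.g.
    citations inflated without matching readership.
    """
    return max(ranks) - min(ranks)

# Made-up ranks: consistent across the panel vs. citations far
# ahead of downloads and readership.
print(rank_spread([12, 15, 10]))   # small spread: plausible
print(rank_spread([3, 140, 155]))  # large spread: worth a look
```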

DORA contains several recommendations aimed at research funders, institutions and researchers. While Elsevier takes great interest in these recommendations, given that their target audiences comprise many of Elsevier's key partners in the research ecosystem, we do not feel it appropriate for Elsevier to take a position on those points.

For this reason, Elsevier does not intend to sign the Declaration in its entirety, but we appreciate why others would do so (including Mendeley; see "Mendeley on DORA" above). Instead, we reiterate that Elsevier welcomes this development in the spirit of an open and informed debate on the future of scholarly research assessment and looks forward to engaging even more deeply with the originators of DORA, its signatories, and those groups to whom it is addressed.


References

1 Misteli, T. (2013) "Eliminating the impact of the Impact Factor." The Journal of Cell Biology. Retrieved from http://jcb.rupress.org/content/early/2013/05/21/jcb.201304162.full on 24 May 2013.

2 Amin, M. and Mabe, M. (2000) "Impact Factors: Use and Abuse." Perspectives in Publishing No. 1. Retrieved from http://cdn.elsevier.com/assets/pdf_file/0014/111425/Perspectives1.pdf on 24 May 2013.

3 Vanclay, J.K. (2012) "Impact Factor: Outdated artefact or stepping-stone to journal certification?" Scientometrics 92 (2), 211–238. This article was also cited in DORA.

4 Moed, H.F., Colledge, L., Reedijk, J., Moya-Anegon, F., Guerrero-Bote, V., Plume, A. and Amin, M. (2012) "Citation-based metrics are appropriate tools in journal assessment provided that they are accurate and used in an informed way." Scientometrics 92 (2), 367–376.


The Author

Dr. Andrew Plume is Associate Director – Scientometrics & Market Analysis in Research & Academic Relations at Elsevier, specializing in scientometrics (the scientific quantification and analysis of science). By accumulating a broad spectrum of data, ranging from specific primary sources such as authors and single articles to the broadest data resources generated from countries and entire subject domains, he studies information flows in the scholarly literature by analyzing patterns of publications and citations. His particular interest lies in the use (and abuse) of the Impact Factor and the emergence of alternative metrics for journal evaluation. Dr. Plume frequently presents these topics, among others, to journal editors, learned and scholarly societies, and the publishing community.
 
After taking his PhD in plant molecular biology from the University of Queensland (Australia) and conducting post-doctoral research at Imperial College London, Dr. Plume joined Elsevier in 2004. He has co-authored research and review articles in the peer-reviewed literature and is a member of the editorial board of Research Trends.

"The science of science is more than simply crunching numbers for their own sake," he writes. "It's about detecting patterns amongst the noise that tell us something about the fundamental workings of scholarly communication. Science is a very - arguably the most - human pursuit, and we see that emerge in our analyses every day."




2 Archived Comments

Carlos Romo Kröger June 26, 2013 at 2:59 pm

Determining the impact factor from the single variable of the number of readers does not seem right; we should also look at the degree of specialization of the publication, etc.

Terry Grimmond June 28, 2013 at 12:19 am

Until a better, free system is available I am happy with the IF, with one caveat - it has to be freely available (no cost) and not controlled commercially, as it is now.

