Elsevier Connect

The changing face of journal metrics

As communication evolves, new methods help evaluate the impact of scholarly journals

The Authors

Mike Taylor
Judith Kamalski, PhD

Mike Taylor has worked at Elsevier for 16 years, the past four as a technology research specialist for the Elsevier Labs group. In that role, he has been involved with the ORCID Registry. His other research interests include altmetrics, contributorship and author networks. Details of his research work can be found on the Elsevier Labs website. He is based in Oxford.

As Manager of Strategic Research Insights & Analytics at Elsevier, Dr. Judith Kamalski focuses on demonstrating Elsevier’s bibliometric expertise and capabilities by connecting with the research community. She is heavily involved in analyzing, reporting and presenting commercial research performance evaluation projects for academic institutes and government agencies. Based in Amsterdam, Dr. Kamalski has worked within several areas at Elsevier, including bibliographic databases, journal publishing, strategy, sales and, most recently, in the Research & Academic Relations department. She has a PhD from the Utrecht Institute of Linguistics.

A version of this article ran in the October 2012 issue of Editors' Update.


For decades, a principal measure of an article's impact on the scholarly world has been the number of citations it has received.1 An increasing focus on using these citation counts as a proxy for scientific quality provided the catalyst for the development of journal metrics, including Garfield’s invention of the Impact Factor in the 1950s2.

Journal-level metrics have continued to evolve and be refined; for example, the relative newcomers SNIP (Source Normalized Impact per Paper) and SJR (SCImago Journal Rank) are now used on Elsevier’s Scopus3.
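For readers who want to see the arithmetic behind the oldest of these measures, the sketch below computes a classic two-year Impact Factor from invented counts. The function name and figures are purely illustrative; SNIP and SJR build on the same citations-per-paper idea but add field normalization and network-based weighting, respectively.

    # Illustrative sketch of the classic two-year Journal Impact Factor.
    # All counts below are invented for demonstration purposes only.

    def two_year_impact_factor(citations_in_year_y, citable_items_prev_two_years):
        """Citations received in year Y to items the journal published in
        years Y-1 and Y-2, divided by the number of citable items it
        published in those two years."""
        return citations_in_year_y / citable_items_prev_two_years

    # Hypothetical journal: 420 citations in 2012 to its 2010-2011 papers,
    # of which there were 150 citable items.
    print(two_year_impact_factor(420, 150))  # 2.8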

Types of bibliometric indicators. First, second and third generation reproduced with the permission of Dr. Henk Moed and Dr. Andrew Plume from their article "The multi-dimensional research assessment matrix" (2011)4; fourth and fifth added by the authors.
In recent years, however, interest has grown in applications at the author, institute and country level. These developments are summarized in the chart above. The Journal Impact Factor (JIF) was born at a time when there was one delivery route for scholarly articles – paper publications – and computational power was expensive.


The migration from paper to electronic delivery (particularly online) has enabled better understanding and analysis of citation count-based impact measurements and created a new supply of user-activity measurements: page views and downloads. Over the past few years, the growing importance of social networking – combined with a rising number of platforms making their activity data publicly available – has resulted in new ways of measuring scholarly communication activities, now encapsulated by the term altmetrics5. Although we have added these new metrics to the chart, we are not suggesting that each succeeding generation necessarily replaces the earlier ones. In fact, the Relative Impact Measure is still used substantially, even though network analysis exists.

The choice of which metric to use is often influenced by the context and the question, and first-, second- or third-generation metrics may still prove the more suitable option. Although the word altmetrics is still relatively new (not yet three years old), several maturing applications already rely on such data to give a sense of the wider impact of scholarly research. Plum Analytics is a recent, commercial newcomer, whereas Digital Science's Altmetric.com is a better-established, partially commercial solution.

A third mature product is ImpactStory (formerly total-impact.org), an always-free and open application. Altmetrics applications acquire the broadest possible set of data about content consumption. This includes HTML page views and PDF downloads, social usage (e.g., tweets and Facebook comments), and more specialized researcher activities, such as bookmarking and reference sharing via tools like Mendeley, Zotero and CiteULike. A list of the data sources used by ImpactStory appears below. In addition to counting activities surrounding the full article, figure and data re-use are also totalled. Altmetric.com also takes into account mass-media links to scholarly articles.
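As a rough illustration of what an aggregator does with this breadth of data, the sketch below groups per-source counts into the broad categories just described (usage, social activity, scholarly bookmarking). The source names, categories and numbers are hypothetical and are not tied to any particular provider's schema.

    # Minimal sketch: grouping per-source counts into altmetrics categories.
    # Source names, categories and counts are hypothetical, not a real schema.

    from collections import defaultdict

    CATEGORY_BY_SOURCE = {
        "html_views": "usage",
        "pdf_downloads": "usage",
        "tweets": "social",
        "facebook_comments": "social",
        "mendeley_readers": "scholarly bookmarking",
        "citeulike_bookmarks": "scholarly bookmarking",
    }

    def summarize(counts):
        """Roll raw per-source counts up into category totals."""
        totals = defaultdict(int)
        for source, n in counts.items():
            totals[CATEGORY_BY_SOURCE.get(source, "other")] += n
        return dict(totals)

    example = {"html_views": 1200, "pdf_downloads": 300, "tweets": 45,
               "facebook_comments": 6, "mendeley_readers": 80}
    print(summarize(example))
    # {'usage': 1500, 'social': 51, 'scholarly bookmarking': 80}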


An example of the Altmetric.com donut, which can be found on many Scopus articles. This one, from the paper "How to Choose a Good Scientific Problem" in Molecular Cell, shows that (at the time of writing) the article has been mentioned 89 times on a variety of platforms and saved as a bookmark by more than 4,000 people.

To get a feel for how altmetrics work, you can visit ImpactStory or Altmetric.com and enter a publication record. Alternatively, if you have access to Elsevier’s Scopus, you will find that many articles already carry an Altmetric.com donut in the right-hand bar (the donut may not be visible in older versions of Microsoft Internet Explorer). If no data is yet available, the Altmetric.com box will not appear on the page. Elsevier also supplies data to ImpactStory, sending cited-by counts to the platform.
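For those who prefer to query programmatically, Altmetric.com also exposes a public REST endpoint keyed by DOI. The sketch below assumes that endpoint (api.altmetric.com/v1/doi/<doi>) and the JSON field name cited_by_posts_count, both taken from the public documentation at the time of writing and liable to change; the DOI shown is a placeholder.

    # Hedged sketch: look up an article's altmetrics counts by DOI.
    # Assumes the public endpoint api.altmetric.com/v1/doi/<doi> and the
    # response field "cited_by_posts_count"; both may change over time.

    import requests

    def altmetric_counts(doi):
        resp = requests.get("https://api.altmetric.com/v1/doi/" + doi, timeout=10)
        if resp.status_code == 404:   # no altmetrics data collected yet
            return None
        resp.raise_for_status()
        return resp.json()

    data = altmetric_counts("10.1016/j.example.2012.01.001")  # placeholder DOI
    if data is not None:
        print(data.get("cited_by_posts_count", 0), "mentions tracked so far")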

What do all these numbers mean?

Although there is some evidence linking social network activity, such as tweets, with ultimate citation count (Priem, Piwowar & Hemminger, 20126; Eysenbach, 20117), this field is still in its early stages, and a considerable number of areas still require research. Further investigation aims to uncover patterns and relationships between usage data and ultimate citation, allowing users to discover papers of interest and influence they might previously have failed to notice. A minimal sketch of how such a relationship might be tested appears after the list below. Planned areas of research include:

  • Scholarly consumption versus lay consumption. With so much benefit to be gained from encouraging public engagement in science, we need new ways of tracking this. After all, while members of the public are unlikely to cite articles in a formal setting, we may well see increased social sharing. Analysis of usage data might reveal striking differences between scholarly and lay usage patterns. For example, references to research amongst the general public may be primarily driven by mass-media references – just as the mass media might be influenced by academic work going viral on Twitter and Facebook – whereas one might hypothesize that activity measured in specialized scholarly tools, such as Mendeley, would be less subject to this influence. This information could be critical in allowing publishers and platform owners to tweak their systems so as to best support use and to report on wider usage to funding agencies.
  • When does social networking become marketing and when does it become gaming or cheating? There has been criticism8 that the JIF can be increased by excluding or including reference counts from certain types of articles, and by journals' self-citation policies. Social data is just as prone to influence. For example, while authors' tweets about their papers are perfectly legitimate social marketing of the type previously done through email groups, and while it's reasonable to assume that some mentions of this type will go 'viral' and thus be propelled towards mass media mentions and possibly drive citations, there will inevitably be concerted efforts to build momentum that goes beyond natural self/network marketing. A sophisticated analysis of social networking mapped against author networks might be able to detect and downplay this type of activity.
  • What other factors influence ultimate impact? As we expand our ability to understand what drives scholarly impact and how usage patterns should be interpreted, the scope should increase to include other, non-social facets. For example, do cross-discipline papers get a wider readership than simply the disciplines targeted? Do papers with a lay abstract attract a wider lay audience? To what extent does the inclusion of a high-ranking contributor boost citation above what might be predicted?
  • Do any particular consumption activities predicate others? Is there a computable conversion rate for moving from one activity to another? How do these vary over time and by discipline? What activities lead to citation? Are there papers that are less well cited – or not cited at all – that nevertheless appear to have impact in other ways?

Data sources used by ImpactStory
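One way the questions above might be approached empirically is sketched below: a rank correlation between early social-media mentions and later citation counts, in the spirit of the Priem, Piwowar & Hemminger and Eysenbach studies cited earlier. The figures are invented and scipy is assumed to be available; this is an illustration of method, not a result.

    # Illustrative sketch: rank correlation between early tweet counts and
    # later citation counts for a small set of papers. All numbers invented.

    from scipy.stats import spearmanr

    tweets_in_first_week = [0, 3, 12, 45, 2, 7, 30, 1]
    citations_after_two_years = [1, 4, 10, 25, 3, 6, 18, 0]

    rho, p_value = spearmanr(tweets_in_first_week, citations_after_two_years)
    print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")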
Altmetrics is still in its infancy, both as a field of study and as a commercial activity. Currently only a handful of smaller organizations are involved, and there is no engagement from major web players such as Google or Microsoft. On the publisher front, while all are active with altmetrics in some form, only Macmillan has chosen to get involved via Digital Science's Altmetric.com. That means there is a great deal to play for. We expect to see more platforms and research emerge, and it is not impossible to envisage the development of professional advisers who work with institutions to increase their altmetrics counts – especially now that impact is increasingly tied to funding decisions (e.g., government funding in the UK via the Research Excellence Framework).

Elsevier is fully engaged with the altmetrics movement. For example, in 2013 the Elsevier Labs team aims to co-publish large-scale research that will begin to explore the relationship between the different facets and to establish a framework for understanding the meaning of this activity. It aims to build on the current work to found an empirically based discipline that analyses the relationship between social activity, other factors, and both scholarly and lay consumption and usage. By combining knowledge from across Elsevier, we intend to show that no single measurement can provide the whole picture and that a panel of metrics, informed by empirical research and expert opinion, is typically the best way to analyze the performance of a journal, an author or an article.

 

References and useful links

  1. For an overview of the Impact Factor and journal evaluation, see the Wikipedia article.
  2. "The Agony and the Ecstasy — The History and Meaning of the Journal Impact Factor," presented by Eugene Garfield at the International Congress on Peer Review and Biomedical Publication (2005)
  3. For more information on SNIP and SJR, see the Elsevier website journalmetrics.com and Henk Moed’s interview on the SNIP methodology on YouTube.
  4. "The multi-dimensional research assessment matrix," by Henk Moed and Andrew Plume, Research Assessment, May 2011. 
  5. More on the altmetrics movement, conferences and workshops may be found at www.altmetrics.org
  6. "Altmetrics in the wild: Using social media to explore scholarly impact," by Jason PriemHeather A. PiwowarBradley M. Hemminger (March 2012)
  7. "Can Tweets Predict Citations? Metrics of Social Impact Based on Twitter and Correlation with Traditional Metrics of Scientific Impact,” by Gunther Eysenbach for the Journal of Medical Internet Research (Oct.-Dec. 2011).
  8. "Show me the data," an editorial by Mike RossnerHeather Van Epps and Emma Hill for The Journal of Cell Biology (December 2007).

