Measuring a journal’s impact
Metrics have become a fact of life in many - if not all - fields of research and scholarship. In an age of information abundance (often termed ‘information overload’), shorthand signals that indicate where in the ocean of published literature to focus our limited attention have become increasingly important.
Research metrics are sometimes controversial, especially when in popular usage they become proxies for multidimensional concepts such as research quality or impact. Each metric may offer a different emphasis based on its underlying data source, method of calculation, or context of use. For this reason, Elsevier promotes the responsible use of research metrics, encapsulated in two “golden rules”: always use both qualitative and quantitative input for decisions (i.e. expert opinion alongside metrics), and always use more than one research metric as the quantitative input. This second rule acknowledges that performance cannot be expressed by any single metric, and that all metrics have specific strengths and weaknesses. Using multiple complementary metrics therefore helps to provide a more complete picture and to reflect different aspects of research productivity and impact in the final assessment.
On this page we introduce some of the most popular citation-based metrics employed at the journal level. Where available, they are featured in the “Journal Insights” sidebar on Elsevier journal homepages (for example), which links through to an even richer set of indicators on the Journal Insights homepage (for example).
CiteScore metrics
CiteScore metrics are a suite of indicators calculated from data in Scopus, the world’s largest abstract and citation database of peer-reviewed literature. CiteScore itself is the sum of the citations received in a given year to publications published in the previous three years, divided by the number of publications in those same three years. CiteScore is calculated for the current year on a monthly basis until it is fixed as a permanent value in May of the following year, permitting a real-time view of how the metric builds as citations accrue. Once fixed, the other CiteScore metrics are also computed; these contextualise the score with rankings and other indicators to allow comparison. CiteScore metrics are transparent, comprehensive and current, with the scores and underlying data for more than 23,000 journals, book series and conference proceedings freely available at www.scopus.com/sources.
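The ratio described above can be sketched as a short calculation. The function name and the citation and publication counts below are invented for illustration; real values come from Scopus data.

```python
def cite_score(citations_in_year: int, publications_in_window: int) -> float:
    """Citations received in a given year to items published in the
    previous three years, divided by the number of items published
    in that same three-year window."""
    return citations_in_year / publications_in_window

# Hypothetical journal: 4,500 citations received in 2023 to the
# 1,800 items it published in 2020-2022.
print(cite_score(4500, 1800))  # 2.5
```

Because both the numerator and the denominator are simple counts over a fixed window, the score can be recomputed monthly as new citations accrue, which is what allows the real-time view described above.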
SCImago Journal Rank (SJR)
SCImago Journal Rank (SJR) is based on the concept of a transfer of prestige between journals via their citation links. Drawing on a similar approach to the Google PageRank algorithm - which assumes that important websites are linked to from other important websites - SJR weights each incoming citation to a journal by the SJR of the citing journal, with a citation from a high-SJR source counting for more than a citation from a low-SJR source. Like CiteScore, SJR accounts for journal size by averaging across recent publications and is calculated annually. SJR is also powered by Scopus data and is freely available alongside CiteScore at www.scopus.com/sources.
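The prestige-transfer idea can be sketched with a small PageRank-style iteration. The three-journal citation network below is invented, and the real SJR algorithm includes further refinements (field normalisation, size correction, treatment of self-citations); this is only a toy illustration of citations weighted by the prestige of their source.

```python
def prestige(citations, iterations=50, damping=0.85):
    """Toy PageRank-style prestige scores for a citation network.

    citations[i][j] = number of citations from journal i to journal j.
    Each journal passes on its current prestige in proportion to where
    its citations go, so a citation from a high-prestige journal counts
    for more than one from a low-prestige journal.
    """
    n = len(citations)
    scores = [1.0 / n] * n
    for _ in range(iterations):
        new = [(1 - damping) / n] * n
        for i in range(n):
            out_total = sum(citations[i])
            if out_total == 0:
                continue
            for j in range(n):
                new[j] += damping * scores[i] * citations[i][j] / out_total
        scores = new
    return scores

# Invented citation counts between three journals A, B and C.
cites = [
    [0, 10, 2],  # A cites B heavily
    [3, 0, 1],   # B cites A and C a little
    [8, 4, 0],   # C cites A heavily
]
print(prestige(cites))
```

The scores always sum to 1, so they express each journal's share of the network's total prestige rather than a raw citation count.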
Source Normalized Impact per Paper (SNIP)
Source Normalized Impact per Paper (SNIP) is a sophisticated metric that intrinsically accounts for field-specific differences in citation practices. It does so by comparing each journal’s citations per publication with the citation potential of its field, defined as the set of publications citing that journal. SNIP therefore measures contextual citation impact and enables direct comparison of journals in different subject fields, since the value of a single citation is greater for journals in fields where citations are less likely, and vice versa. SNIP is calculated annually from Scopus data and is freely available alongside CiteScore and SJR at www.scopus.com/sources.
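The normalisation idea can be sketched as a single ratio. The numbers below are invented, and the official SNIP definition (maintained by CWTS) involves a more refined calculation of citation potential; this only illustrates why a citation in a low-citing field is worth more.

```python
def snip_sketch(citations_per_paper: float, field_citation_potential: float) -> float:
    """Divide a journal's raw citations per paper by the citation
    potential of its field (how often papers citing that journal
    tend to cite at all)."""
    return citations_per_paper / field_citation_potential

# A mathematics journal: few citations per paper, but a low-citing field.
maths = snip_sketch(2.0, 1.0)
# A biomedical journal: many citations per paper, but a high-citing field.
biomed = snip_sketch(8.0, 4.0)
print(maths, biomed)  # 2.0 2.0 -- directly comparable across fields
```

Despite a fourfold difference in raw citation rates, the two hypothetical journals end up with the same normalised value, which is the sense in which SNIP enables comparison across subject fields.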
Journal Impact Factor (JIF)
Journal Impact Factor (JIF) is calculated by Clarivate Analytics as the citations received in a given year to a journal’s previous two years of publications (linked to the journal, but not necessarily to specific publications), divided by the number of “citable” publications in those two years. Owing to the way citations are counted in the numerator and the subjectivity of what constitutes a “citable item” in the denominator, JIF has for many years received sustained criticism for its lack of transparency and reproducibility and for its potential for manipulation. Available for over 11,000 journals, JIF is based on an extract of Clarivate’s Web of Science database.
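The two-year ratio can be sketched in the same way as CiteScore, with the window narrowed to two years and the denominator restricted to "citable" items. The counts below are invented; real values come from Web of Science data, and the subjectivity criticised above lies in deciding which items count as citable.

```python
def impact_factor(citations_in_year: int, citable_items_prev_two_years: int) -> float:
    """Citations received in a given year to the journal's previous two
    years of content, divided by the number of items from those two
    years that Clarivate judges 'citable'."""
    return citations_in_year / citable_items_prev_two_years

# Hypothetical journal: 1,200 citations received in 2023 to content
# published in 2021-2022, with 400 citable items in that window.
print(impact_factor(1200, 400))  # 3.0
```

Note that shrinking the denominator (by classifying fewer items as citable) raises the score while the numerator stays the same, which is one reason the metric is considered open to manipulation.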