Journal and article metrics
This page is intended to provide you with definitions of bibliometric indicators used to measure the influence of journals, as well as to guide you on their uses and limitations. In addition, we will discuss Article-Level Metrics (ALMs), which are a new approach to quantifying the reach and impact of published research.
Scopus has selected several bibliometric indicators to measure the influence of journals. On the first tab, you can read about IPP (Impact per Publication), SNIP (Source Normalized Impact per Paper) and SJR (SCImago Journal Rank). We also examine the h-index, which is a measure of an individual's performance.
On the second tab we discuss the Impact Factor, which is currently the bibliometric indicator most commonly used by researchers and research management.
However, criticism of the Impact Factor has led to the development of new initiatives and indicators. On tab three, we explain the other metrics included in Thomson Reuters' Journal Citation Reports, such as the Immediacy Index, Cited Half-life and Eigenfactor.
We also touch on how you can monitor citation trends using tools such as the Scopus Journal Analyzer in tab four.
Tab five covers Article-Level Metrics (citations, usage and altmetrics) and the Elsevier initiatives we have developed around them. On the journal homepage, the Journal Insights pod gives a powerful visualization of five years of historical data. It draws on eight metrics in three key groups (quality, speed, authors), developed to help authors decide which journal to submit to.
My Research Dashboard
Introducing My Research Dashboard
Imagine having publication readership data within days of publication!
Anyone who has published knows that it has traditionally taken years before learning how well a publication is doing.
My Research Dashboard is a breakthrough service that allows you as an author to understand in greater detail and with greater speed how your publications are being read, shared and cited. This free service, which replaces Usage Alerts and CiteAlerts, is available exclusively to Elsevier authors like you.
My Research Dashboard allows you to measure quickly and easily the impact of your research, which can be invaluable when applying for funding or when seeking a new position or a promotion. It provides:
- Early feedback about how your publication is being downloaded, shared and cited
- Data on where in the world and what discipline your readers are in
- Detailed information about how your publications are being discovered
My Research Dashboard is the first of a range of services that we at Elsevier are developing to better support you and your research goals.
How it works
Connecting the power of Scopus, ScienceDirect and Mendeley, your personal Dashboard captures citation data for all of your publications published in any journal, as well as usage data for all of your publications published in Elsevier journals.
The Dashboard is generated especially for you and can be viewed only by you. However, you can easily share results and/or publications with your peers via email or by using the social media buttons located on the Dashboard.
Once you have accessed the Dashboard for the first time, you'll receive monthly email updates notifying you when new metrics are available. New publications will be added automatically as you publish them, and your Dashboard will be updated as new data become available. You can check metrics on your Dashboard at any time.
It is not always necessary to produce tables ranking a journal against other journals to measure their performance. There are many other ways of assessing the development of a journal by tracking its own performance patterns over time. Scopus is invaluable for such analyses, supporting citation analysis from 1996 over any number of years that is appropriate to the question being addressed. Our editors have complimentary access to Scopus via Elsevier Editorial System (EES).
Scopus Journal Analyzer
The Scopus Journal Analyzer provides you with a quick, easy and transparent view of journal performance, including two journal metrics, SJR and SNIP, also available at www.journalmetrics.com. Using citations from nearly 19,500 titles from 5,000 international publishers, the Scopus Journal Analyzer gives access to an objective overview of the journal landscape going back to 1996.
The Scopus Journal Analyzer turns a laborious task into a simple comparison, giving you more time to analyze the results and make clear, informed decisions. Use it to:
- analyze and manage journals more effectively
- learn from the competitive landscape
- identify new growth areas
- set out a strategy to improve performance.
The Scopus Journal Analyzer's unique functionality provides you with six graphical representations of the journals:
SCImago Journal Rank (SJR) is a measure of the scientific prestige of scholarly sources: value of weighted citations per document. A source transfers its own 'prestige', or status, to another source through the act of citing it. A citation from a source with a relatively high SJR is worth more than a citation from a source with a lower SJR. For more information on SJR click here.
Source Normalized Impact per Paper (SNIP) measures contextual citation impact by weighting citations based on the total number of citations in a subject field. The impact of a single citation is given higher value in subject areas where citations are less likely, and vice versa. For more information on SNIP click here.
Citations displays the total number of citations the selected journals receive over the course of each year.
Documents shows the number of articles published by each journal over time.
% Not Cited provides the percentage of all documents that did not receive citations in that year.
% Reviews provides the percentage of documents that are reviews.
Article and issue types
Evaluating differences between average citations per item type, and/or per article in distinct issue types, may raise points for consideration when setting the future strategy of a journal.
Review articles are, on average, cited three times more frequently than original research articles; this is illustrated in our Perspectives in Publishing paper. This provides a useful benchmark for assessing the topicality of reviews published in a particular journal against the average item the journal publishes.
Similarly, special/themed issues and supplements are often published with the aim of attracting citations at a higher rate than a regular issue.
For each journal, a particular level of citation can be assigned that indicates a 'key article'. The number of years over which incoming citations are counted, and the level at which an article begins to be considered 'key', will vary per subject area and/or journal.
The proportion of journal content that is 'key' can indicate improvements in commissioning activities, in attracting the choicest research and/or authors, or in whatever activity(ies) have been undertaken to attract such content.
Content assessment by citations counted over varying time periods can be done very flexibly using the Scopus Citation Tracker.
Citation inflow is a general indicator of high-quality journal content that is useful to a scientific community and supports the development of the field. It follows that a low proportion of uncited content is desirable, and reductions in the proportion of uncited material can indicate improvements in overall journal quality.
The time after which an article is considered uncited, and the desirable level of uncited content, will vary per journal and per field.
Other Journal Metrics
Thomson Reuters publish other metrics, in addition to the Impact Factor. The Immediacy Index is a measure of the speed at which content in a particular journal is picked up and referred to, and is illustrated in the figure on the right.
The Immediacy Index of journal J in the calendar year X is the number of citations received by J in X to any item published in J in X, divided by the number of source items published in J in X.
An example follows for the fictitious Journal of Great Science:
* In year X, the Journal of Great Science received 84 citations to items published in X
* 120 source items were published in the Journal of Great Science in X
* Year X Immediacy Index for the Journal of Great Science = 84/120 = 0.700
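The example above is a single division; as a sketch:

```python
def immediacy_index(citations_in_year, source_items_in_year):
    """Citations received in year X to items published in X, divided by
    the number of source items published in X."""
    return citations_in_year / source_items_in_year

# Journal of Great Science, year X (figures from the example above)
print(f"{immediacy_index(84, 120):.3f}")  # 0.700
```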
Like the Impact Factor, the Immediacy Index can be affected by characteristics peculiar to the particular field. It will only be important for those fields in which citations start to flow in quite quickly, such as fundamental life sciences or neurosciences.
Thomson Reuters also publish the Cited Half-Life, in addition to the Impact Factor and the Immediacy Index. The Cited Half-Life is a measure of the 'archivability' of content in a particular journal, or of how long content is referred to after publication. It is illustrated in the figure above.
The Cited Half-Life of journal J in year X is the number of years after which 50% of the lifetime citations of J's content published in X have been received.
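A sketch of this definition, assuming the full lifetime citation history per year is already known (the yearly counts below are hypothetical):

```python
def cited_half_life(citations_per_year):
    """Years after which half of the lifetime citations have been received.
    citations_per_year[0] is the citation count in the publication year."""
    half = sum(citations_per_year) / 2
    cumulative = 0
    for years, count in enumerate(citations_per_year, start=1):
        cumulative += count
        if cumulative >= half:
            return years
    return len(citations_per_year)

# 200 lifetime citations; the 100th arrives during the third year
print(cited_half_life([10, 40, 60, 50, 25, 10, 5]))  # 3
```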
Like the Impact Factor and Immediacy Index, the Cited Half-Life can be affected by characteristics peculiar to the particular field. It will be more important for those fields in which citations start to flow in slowly after a significant lag time, such as social sciences, or mathematics and computer sciences.
Eigenfactor and Article Influence
The Eigenfactor and Article Influence are recently developed metrics based on data held in Thomson Reuters' Journal Citation Reports. They are freely available at www.eigenfactor.org.
The Eigenfactor of journal J in year X is defined as the percentage of weighted citations received by J in X to any item published in (X-1), (X-2), (X-3), (X-4) or (X-5), out of the total citations received by all journals in the dataset. Only citations received from journals other than J are counted. The Eigenfactor is not corrected for article count, and so measures the total influence of a journal rather than its per-article impact; bigger and highly cited journals will tend to be ranked highly.
As with the SCImago Journal Rank, each (non-self) citation is assigned a value greater or less than one based on the Eigenfactor of the citing journal. The weighting to be applied is calculated iteratively from an arbitrary constant. See detailed methodology.
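The iteration can be sketched as a power iteration over a citation matrix. The three-journal matrix below is hypothetical, and this is a simplification of the published methodology (which, among other things, also weights by article counts):

```python
# Hypothetical citation matrix: cites[i][j] = citations from journal j to
# journal i, with self-citations excluded (set to zero on the diagonal).
cites = [[0, 4, 2],
         [1, 0, 3],
         [5, 2, 0]]
n = len(cites)

# Column-normalize so each citing journal distributes one unit of influence.
col_sums = [sum(cites[i][j] for i in range(n)) for j in range(n)]
m = [[cites[i][j] / col_sums[j] for j in range(n)] for i in range(n)]

# Start from an arbitrary constant vector and iterate until weights settle:
# each journal's weight becomes the weighted influence flowing into it.
w = [1.0 / n] * n
for _ in range(100):
    w = [sum(m[i][j] * w[j] for j in range(n)) for i in range(n)]
    total = sum(w)
    w = [x / total for x in w]

print([round(x, 3) for x in w])
```

The fixed point of this iteration assigns each journal a weight proportional to the weights of the journals citing it, which is the sense in which a citation from a prestigious journal counts for more.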
Article Influence is calculated by dividing the Eigenfactor by the percentage of all articles recorded in the Journal Citation Reports that were published in J. Article Influence is therefore conceptually similar to the Impact Factor and SCImago Journal Rank.
The Journal Impact Factor is published each year by Thomson Reuters. It measures the number of times an average paper in a particular journal has been referred to.
The Impact Factor of journal J in the calendar year X is the number of citations received by J in X to any item published in J in (X-1) or (X-2), divided by the number of source items published in J in (X-1) or (X-2).
'Source items' is the term used to refer to full papers: original research articles, reviews, full length proceedings papers, rapid or short communications, and so on. Non-source items, such as editorials, short meeting abstracts, and errata, are not counted in the denominator although any citations they might receive will be included in the numerator.
An example follows for the fictitious Journal of Great Science:
* In year X, the Journal of Great Science received 152 citations to items published in (X-1) and 183 citations to items published in (X-2). Total citations for Impact Factor calculation = 335.
* 123 source items were published in the Journal of Great Science in (X-1), and 108 in (X-2). Total source items for Impact Factor calculation = 231.
* Year X Impact Factor for the Journal of Great Science = 335/231 = 1.450.
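The worked example can be expressed as a short function:

```python
def impact_factor(citations_by_year, source_items_by_year):
    """Citations in year X to items from (X-1) and (X-2), divided by the
    number of source items published in (X-1) and (X-2)."""
    return sum(citations_by_year) / sum(source_items_by_year)

# Journal of Great Science, year X (figures from the example above)
print(f"{impact_factor([152, 183], [123, 108]):.3f}")  # 1.450
```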
Impact Factor can be affected by subject field, number of authors, content type, and the size of the journal; this is described in our Perspectives in Publishing paper, from which the figure above, showing a generalized citation curve and how Thomson Reuters' metrics relate to it, is taken.
The Impact Factor can be a useful way of comparing citability of journals, if the comparison is limited to a given subject field and the type of journals being compared (review, original research, letters) are similar. The absolute Impact Factor is of limited use, without that of other journals in the field against which to judge it.
You can find the most recent Impact Factors of our individual journals on their homepages.
Five-year Impact Factor
The five-year Impact Factor is similar in nature to the regular 'two-year' Impact Factor, but instead of counting citations in a given year to the previous two years and dividing by source items in these years, citations are counted in a given year to the previous five years and again divided by the source items published in the previous five years.
An example for Tetrahedron Letters:
2-yr Impact Factor: 9621 citations in 2010 to items published in 2008 and 2009 / 3675 items published in 2008 and 2009 = 2.618
5-yr Impact Factor: 23846 citations in 2010 to items published in 2005, 2006, 2007, 2008, and 2009 / 9602 items published in 2005-2009 = 2.483
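Both figures follow from the same calculation, only with different windows:

```python
def window_impact_factor(citations_to_window, items_in_window):
    """Citations in the census year to items from the window, divided by
    the number of items published in the window."""
    return citations_to_window / items_in_window

# Tetrahedron Letters, 2010 (figures from the example above)
print(f"2-yr: {window_impact_factor(9621, 3675):.3f}")   # 2-yr: 2.618
print(f"5-yr: {window_impact_factor(23846, 9602):.3f}")  # 5-yr: 2.483
```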
A base of five years may be more appropriate for journals in certain fields because the body of citations may not be large enough to make reasonable comparisons or it may take longer than two years to disseminate and respond to published works. The two measures differ also in the amount of variability between years. The two-year Impact Factor can fluctuate by around 20% in value each year, whereas the five-year measure, while still showing changes over time, presents a much smoother variation.
The exact values of the two metrics may differ, but this difference often disappears when one looks at the relative position of a journal within its subject field. If the whole field evolves more slowly and benefits from a five-year measure, the rankings will not differ much.
Journals are often ranked by Impact Factor in an appropriate Thomson Reuters subject category. As there are now two published Impact Factors, this rank may be different when using a two- or a five-year Impact Factor and care is needed when assessing these ranked lists to understand which metric is being utilized. In addition, journals can be categorized in multiple subject categories which will cause their rank to be different and consequently a rank should always be in context to the subject category being utilized.
Scopus Journal Metrics
Impact per Publication
The Impact per Publication (IPP) measures the number of citations in a year (Y) to scholarly papers published in the three previous years (Y-1, Y-2, Y-3), divided by the number of scholarly papers published in those same years (Y-1, Y-2, Y-3). The IPP metric uses a citation window of three years, which is considered the optimal period in which to accurately measure citations in most subject fields. Counting the same peer-reviewed scholarly papers in both the numerator and denominator of the equation provides a fair impact measurement of the journal and diminishes the chance of manipulation.
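A sketch of this definition (the journal and counts below are hypothetical):

```python
def ipp(citations_in_year, papers_by_prior_year):
    """Citations in year Y to peer-reviewed papers from Y-1, Y-2 and Y-3,
    divided by the number of such papers."""
    return citations_in_year / sum(papers_by_prior_year)

# Hypothetical journal: 450 citations in Y to papers from the three prior
# years, which published 100, 110 and 90 peer-reviewed papers respectively.
print(f"{ipp(450, [100, 110, 90]):.2f}")  # 1.50
```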
The IPP is comparable to the Impact Factor, but uses a citation window of three years (as opposed to two years for the Impact Factor) and counts peer-reviewed document types only (articles, conference papers and review papers) in the calculation (as opposed to the Impact Factor's use of citations to all documents in the numerator but only "citable" documents in the denominator). Also, Scopus' much broader coverage means that IPP is available for many more journals than the Impact Factor.
The IPP is not normalized for the subject field and therefore gives a raw indication of the average number of citations a publication in the journal is likely to receive. When normalized for the citation rate in the subject field, the raw Impact per Publication becomes the Source Normalized Impact per Paper (SNIP). Note that in the context of the calculation of SNIP, the raw Impact per Publication is usually referred to as RIP. Like SNIP, the raw Impact per Publication metric was also developed by Leiden University's Centre for Science & Technology Studies (CWTS). See detailed methodology.
Source Normalized Impact per Paper
An indicator called SNIP (Source Normalized Impact per Paper) was developed by Henk Moed who was then part of the CWTS bibliometrics group at the University of Leiden. The pre-calculated metric was added to the Scopus Journal Analyzer in early 2010 and is freely available at www.journalmetrics.com.
SNIP is a novel approach and as such provides a novel bibliometric perspective. The key idea behind SNIP is that it corrects for subject-specific characteristics of the field someone is publishing in. This means that, contrary to the Impact Factor, SNIP numbers can be compared for any two journals, regardless of the field they are in.
Additional points include:
- Freely available on the web.
- Calculated for more journals than the Impact Factor.
SNIP is defined as the raw Impact per Publication divided by the Relative Database Citation Potential. The raw Impact per Publication is the same as the IPP described above: the ratio of citations in year X to peer-reviewed papers published in years X-1, X-2 and X-3, divided by the number of peer-reviewed papers published in years X-1, X-2 and X-3. As such it is conceptually similar to the Impact Factor. For example, the 2010 SNIP is calculated by dividing citations made in 2010 to peer-reviewed papers published in 2007, 2008 and 2009 by the number of peer-reviewed papers published in 2007, 2008 and 2009. The resulting ratio is then divided by the Relative Database Citation Potential. See detailed methodology.
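As a sketch of that final step, with hypothetical values (the Relative Database Citation Potential itself requires full database data to compute):

```python
def snip(raw_impact_per_publication, relative_db_citation_potential):
    """SNIP = RIP / RDCP, so journals in fields where citations are less
    likely receive a compensating boost."""
    return raw_impact_per_publication / relative_db_citation_potential

# Hypothetical journal: RIP of 2.4 in a field whose citation potential is
# 1.2 times the database average.
print(snip(2.4, 1.2))  # 2.0
```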
SCImago Journal Rank
The SCImago Journal Rank (SJR) was developed by SCImago, a research group from the universities of Granada, Extremadura, Carlos III (Madrid) and Alcalá de Henares, dedicated to information analysis, representation and retrieval by means of visualization techniques.
SCImago Journal Rank is based on citation data of the more than 20,000 peer-reviewed journals indexed by Scopus from 1996 onwards, and is freely available at www.journalmetrics.com.
The central idea of SJR is that citations are weighted, depending on the rank of the citing journal. A citation coming from an important journal will count as more than one citation, a citation coming from a less important journal will count as less than one citation.
The SCImago Journal Rank of journal J in year X is the number of weighted citations received by J in X to any item published in J in (X-1), (X-2) or (X-3), divided by the total number of articles and reviews published in (X-1), (X-2) or (X-3).
SCImago Journal Rank is a measure of the number of times an average paper in a particular journal is referred to, and as such is conceptually similar to the Impact Factor. A major difference is that instead of each citation being counted as one, as with the Impact Factor, the SCImago Journal Rank assigns each citation a value greater or less than one based on the rank of the citing journal. The weighting is calculated iteratively from an arbitrary constant using a three-year window of measurement. See detailed methodology.
Additional information on SJR and a data file containing all SJR values can be found at www.journalmetrics.com.
The h-index was proposed in 2005 by Professor Jorge Hirsch, as a metric for evaluating individual scientists; the paper is freely available.
The h-index rates a scientist's performance based on his or her career publications, as measured by the lifetime number of citations each article receives. The measurement is dependent on both quantity (number of publications) and quality (number of citations) of an academic's publications.
If you list all of a scientist's publications in descending order of citations received to date, their h-index is the largest number h such that h of their articles have each received at least h citations. So their h-index is 10 if 10 articles have each received at least 10 citations, and 81 if 81 articles have each received at least 81 citations. Their h-index is one if every article has received exactly one citation, but also if only one of their articles has received any citations at all.
However, the h-index can be applied to any group of articles, including those published in a particular journal in any given year.
In the fictitious example below, the 80 articles published in a journal in a given year have been ranked by lifetime citations. The h-index of this journal for this year's content is 22, since 22 articles have each received at least 22 citations.
Rank:               1    2    3   …   21   22   23   …   78   79   80
Citations to date:  72   63   59  …   24   24   21  …    0    0    0
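The ranking procedure above can be sketched in a few lines; the citation counts in the usage example are hypothetical:

```python
def h_index(citations):
    """Largest h such that h articles have each received at least h citations."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4
```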
Article-Level Metrics (ALMs) are a new approach to quantifying the reach and impact of published research. ALMs seek to incorporate data from new sources (such as social media mentions) along with traditional measures (such as citations) to present a richer picture of how an individual article is being discussed, shared, and used.
Citations are a well-established measure of research impact as a citation can mean recognition or validation of one's research by others. In this respect it can be important for authors to keep track of citations to their papers, and Elsevier's service CiteAlert helps with this. It is a weekly service that automatically notifies authors by email when their work is referenced by an article in an Elsevier-published journal. It is a unique service from Elsevier; we are the only STM publisher offering a service like this using Scopus data. Notifications are sent for any cited article (highly cited or not), but exclude author self-citations.
Citations can take years to accrue, so a more immediate way to track the reach of a paper is to look into how the article is being viewed on the online platform, and downloaded by users. Elsevier's Article Usage Alerts aim to do this by sending corresponding authors of articles published in many of our participating journals a quarterly email linking to a dashboard of ScienceDirect usage data for the first year after publication of their article.
The dashboard shows both monthly and cumulative usage since publication. In addition, Article Usage Alert provides authors with easy sharing options for promoting their article to peers via social media, hence increasing its visibility and reach.
Readership is a metric which extends usage stats such as downloads by adding information on reader intent. Mendeley is the leading source of readership information, drawing from its global community of millions of researchers, and is relied upon by companies such as Altmetric.com, Impact Story, Plum Analytics, Kudos, and many independent publishers to show research impact and reader engagement.
The Scopus Mendeley Readership app appears for all documents that have been saved at least once on Mendeley. The total number of readers who have added the paper is presented, along with three pieces of demographic information: the top three countries, subject areas and career status of readers. A link is provided to the article in Mendeley, and all Scopus and ScienceDirect documents feature an "Import to Mendeley" function. Learn more.
Another way to measure impact promptly after publication is to track the online attention received by a paper. Elsevier is partnering with Altmetric.com, which aims to do this by capturing online mentions in social media and other web-based data such as bookmarks, tweets, Facebook posts, news, and scientific blogs. Altmetric.com has been integrated into Scopus as a powerful 3rd party web application that runs within the sidebar of Scopus abstract pages. It's a quick and easy way to see all of the social or mainstream media mentions gathered for a particular paper as well as saved counts on popular reference managers. The Altmetric.com application appears in the sidebar when there is data available for the article being viewed. Learn more.
On ScienceDirect various journals also show the Altmetric.com application, and a top 10 of the most popular articles according to Altmetric.com is displayed on the homepages of many journals.