Below you will find definitions of bibliometric indicators used to measure the influence of journals and our thoughts on their uses and limitations. These metrics help you to track and evaluate the development of your journal.
We also touch on how you can monitor citation trends using tools such as the Scopus Journal Analyzer, as well as how our Journal Reports can help to highlight other achievements.
On the journal homepage, the Journal Insights pod gives a powerful visualization of five years of historical data for eight metrics, grouped into three clusters: Impact, Speed and Authors.
The Journal Impact Factor is published each year by Thomson Reuters. It measures the number of times an average paper in a particular journal has been referred to.
The Impact Factor of journal J in the calendar year X is the number of citations received by J in X to any item published in J in (X-1) or (X-2), divided by the number of source items published in J in (X-1) or (X-2).
'Source items' is the term used to refer to full papers: original research articles, reviews, full length proceedings papers, rapid or short communications, and so on. Non-source items, such as editorials, short meeting abstracts, and errata, are not counted in the denominator although any citations they might receive are included in the numerator.
An example follows for the fictitious Journal of Great Science:
- In year X, the Journal of Great Science received 152 citations to items published in (X-1) and 183 citations to items published in (X-2). Total citations for Impact Factor calculation = 335
- 123 source items were published in the Journal of Great Science in (X-1), and 108 in (X-2). Total source items for Impact Factor calculation = 231
- Year X Impact Factor for the Journal of Great Science = 335/231 = 1.450.
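The worked example above can be sketched in a few lines of Python (the function and variable names are illustrative only, not part of any official tool):

```python
def impact_factor(citations_prev_two_years, source_items_prev_two_years):
    """Two-year Impact Factor: citations received in year X to items
    published in (X-1) and (X-2), divided by the source items
    published in (X-1) and (X-2)."""
    return citations_prev_two_years / source_items_prev_two_years

# Journal of Great Science, year X: 152 + 183 citations, 123 + 108 source items.
jgs_if = impact_factor(152 + 183, 123 + 108)  # 335 / 231 = 1.450
```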
The Impact Factor can be affected by subject field, number of authors, content type, and the size of the journal; this is described in our Perspectives in Publishing paper, which is also the source of the figure above showing a generalized citation curve and how the metrics relate to it.
The Impact Factor can be a useful way of comparing citability of journals, if the comparison is limited to a given subject field and the type of journals being compared (review, original research, letters) are similar. The absolute Impact Factor is of limited use, without that of other journals in the field against which to judge it.
You can find the most recent Impact Factors of our individual journals on their homepages.
Five-year Impact Factor
The five-year Impact Factor is similar in nature to the regular ‘two-year’ Impact Factor, but instead of counting citations in a given year to the previous two years and dividing by source items in these years, citations are counted in a given year to the previous five years and again divided by the source items published in the previous five years.
An example for Tetrahedron Letters:
2-yr Impact Factor: 8331 citations in 2012 to items published in 2010 and 2011 / 3475 items published in 2010 and 2011 = 2.397
5-yr Impact Factor: 21699 citations in 2012 to items published in 2007, 2008, 2009, 2010, and 2011 / 9132 items published in 2007, 2008, 2009, 2010, and 2011 = 2.376
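The same ratio generalizes to any window length. As a sketch, the calculation below reproduces the Tetrahedron Letters figures; note that the yearly breakdown is hypothetical, invented only so that it sums to the totals quoted above:

```python
def windowed_impact_factor(citations_by_year, items_by_year, year, window):
    """Citations in `year` to items published in the previous `window`
    years, divided by the source items published in those years."""
    prev_years = range(year - window, year)
    cites = sum(citations_by_year[y] for y in prev_years)
    items = sum(items_by_year[y] for y in prev_years)
    return cites / items

# Hypothetical per-year split, consistent with the quoted totals:
# 8331 citations / 3475 items (2010-2011), 21699 / 9132 (2007-2011).
citations_by_year = {2007: 4500, 2008: 4468, 2009: 4400, 2010: 4200, 2011: 4131}
items_by_year = {2007: 1900, 2008: 1880, 2009: 1877, 2010: 1750, 2011: 1725}

two_year = windowed_impact_factor(citations_by_year, items_by_year, 2012, 2)
five_year = windowed_impact_factor(citations_by_year, items_by_year, 2012, 5)
```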
A base of five years may be more appropriate for journals in certain fields because the body of citations may not be large enough to make reasonable comparisons or it may take longer than two years to disseminate and respond to published works. The two measures differ also in the amount of variability between years. The two-year Impact Factor can fluctuate by around 20% in value each year, whereas the five-year measure, while still showing changes over time, presents a much smoother variation.
The exact values of the two metrics may differ, but this difference often disappears when one looks at the relative position of a journal within its subject field: if the whole field accumulates citations slowly and so benefits from a five-year measure, the rankings will not differ much.
Journals are often ranked by Impact Factor in an appropriate Thomson Reuters subject category. As there are now two published Impact Factors, this rank may be different when using a two- or a five-year Impact Factor and care is needed when assessing these ranked lists to understand which metric is being utilized. In addition, journals can be categorized in multiple subject categories which will cause their rank to be different and consequently a rank should always be in context to the subject category being utilized.
Thomson Reuters publish other metrics, in addition to the Impact Factor. The Immediacy Index is a measure of the speed at which content in a particular journal is picked up and referred to, and is illustrated in the figure below.
The Immediacy Index of journal J in the calendar year X is the number of citations received by J in X to any item published in J in X, divided by the number of source items published in J in X.
An example follows for the fictitious Journal of Great Science:
- In year X, the Journal of Great Science received 84 citations to items published in X
- 120 source items were published in the Journal of Great Science in X
- Year X Immediacy Index for the Journal of Great Science = 84/120 = 0.700
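In code, the Immediacy Index is the same kind of ratio as the Impact Factor but restricted to the current year (the function name below is ours):

```python
def immediacy_index(citations_in_year, source_items_in_year):
    """Citations received in year X to items published in X, per source item."""
    return citations_in_year / source_items_in_year

# Journal of Great Science, year X: 84 citations to 120 source items.
jgs_immediacy = immediacy_index(84, 120)  # 0.700
```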
Like the Impact Factor, the Immediacy Index can be affected by characteristics peculiar to the particular field. It will only be important for those fields in which citations start to flow in quite quickly, such as fundamental life sciences or neurosciences.
Thomson Reuters also publish the Cited Half-Life, in addition to the Impact Factor and the Immediacy Index. The Cited Half-Life is a measure of the ‘archivability’ of content in a particular journal, or of how long content is referred to after publication. It is illustrated in the figure above.
The Cited Half-Life of journal J in year X is the number of publication years, counting back from X, that together account for 50% of the citations received by J in X.
Like the Impact Factor and Immediacy Index, the Cited Half-Life can be affected by characteristics peculiar to the particular field. It will be more important for those fields in which citations start to flow in slowly after a significant lag time, such as social sciences, or mathematics and computer sciences.
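One way to sketch the calculation, under the retrospective reading used in the Journal Citation Reports: take the citations a journal receives in year X, broken down by the publication year of the cited items, and count back until half the citations are covered. The distribution below is hypothetical.

```python
def cited_half_life(citations_by_age):
    """Number of publication years, counting back from year X, that together
    account for at least 50% of the citations received in X.
    citations_by_age[0] covers year-X items, [1] covers (X-1) items, etc."""
    total = sum(citations_by_age)
    running = 0
    for years_back, cites in enumerate(citations_by_age, start=1):
        running += cites
        if running >= total / 2:
            return years_back
    return len(citations_by_age)

# Hypothetical citation-age distribution for a journal in year X.
half_life = cited_half_life([40, 35, 30, 25, 20, 15, 10, 10, 8, 7])  # 3 years
```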
Eigenfactor and Article Influence
The Eigenfactor and Article Influence are recently developed metrics based on data held in Thomson Reuters’ Journal Citation Reports. They are freely available at www.eigenfactor.org.
The Eigenfactor of journal J in year X is defined as the percentage of weighted citations received by J in X to any item published in (X-1), (X-2), (X-3), (X-4), or (X-5), out of the total citations received by all journals in the dataset. Only citations received from a journal other than J are counted. The Eigenfactor is not corrected by article count, and so is a measure of the influence of a particular journal; bigger and highly-cited journals will tend to be ranked highly.
As with the SCImago Journal Rank, each (non-self) citation is assigned a value greater or less than one based on the Eigenfactor of the citing journal. The weighting to be applied is calculated iteratively from an arbitrary constant. See detailed methodology.
Article Influence is calculated by dividing the Eigenfactor by the percentage of all articles recorded in the Journal Citation Reports that were published in J. Article Influence is therefore conceptually similar to the Impact Factor and SCImago Journal Rank.
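The relationship between the two metrics can be sketched as follows; the numbers are hypothetical, and the published formula also applies a fixed normalization constant that is omitted here:

```python
def article_influence(eigenfactor, journal_articles, total_articles):
    """Sketch: the journal's Eigenfactor divided by its share of all
    articles in the dataset (normalization constant omitted)."""
    article_share = journal_articles / total_articles
    return eigenfactor / article_share

# A journal with 2% of weighted citations and 2% of articles scores 1.0:
# average influence per article.
ai = article_influence(0.02, 200, 10_000)
```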
Metrics in Scopus
Source Normalized Impact per Paper
An indicator called SNIP (Source Normalized Impact per Paper) was developed by Henk Moed, who was then part of the CWTS bibliometrics group at the University of Leiden. The pre-calculated metric was added to the Scopus Journal Analyzer in early 2010 and is freely available at www.journalmetrics.com.
SNIP is a novel approach and as such provides a novel bibliometric perspective. The key idea behind SNIP is that it corrects for subject-specific characteristics of the field of publication by taking into account the number of citations per paper, the amount of indexed literature, and the speed of the publication process. This means that SNIP numbers can be compared for any two journals, regardless of the field they are in.
Additional points include:
- Freely available on the web at www.journalmetrics.com
- Use of a three year window
- Article type consistency: only citations to and from scholarly papers are considered.
SNIP is defined as the Raw Impact per Paper divided by the Relative Database Citation Potential. The Raw Impact per Paper is the ratio of citations in year X to scholarly papers published in years X-1, X-2, and X-3, divided by the number of scholarly papers published in those years. See detailed methodology.
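As a sketch (the citation-potential value below is hypothetical; real values come from the CWTS methodology):

```python
def snip(citations, papers, relative_database_citation_potential):
    """Raw Impact per Paper (citations in year X to papers from X-1..X-3,
    per paper published in X-1..X-3), divided by the field's Relative
    Database Citation Potential."""
    raw_impact_per_paper = citations / papers
    return raw_impact_per_paper / relative_database_citation_potential

# A journal with 300 citations to 100 papers, in a field whose relative
# citation potential is assumed to be 1.5.
example_snip = snip(300, 100, 1.5)  # 2.0
```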
SCImago Journal Rank
The SCImago Journal Rank (SJR) was developed by SCImago, a research group from the universities of Granada, Extremadura, Carlos III (Madrid) and Alcalá de Henares, dedicated to information analysis, representation and retrieval by means of visualization techniques.
The central idea of SJR is that citations are weighted, depending on the SJR of the citing journal. A citation from a journal with a high SJR value is worth more than a citation from a journal with a low SJR value.
Additional points include:
- Freely available on the web at www.journalmetrics.com
- Use of a three year window
- Article type consistency: only citations to and from scholarly papers are considered
The SJR in year X is the number of weighted citations received in X to scholarly papers published in X-1, X-2, or X-3, divided by the total number of scholarly papers published in X-1, X-2, or X-3. The weighting is calculated iteratively from an arbitrary constant. See detailed methodology.
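The iterative weighting can be sketched with a toy citation network. Everything below is hypothetical and greatly simplified (the real SJR calculation includes damping and size-correction steps); it shows only the core idea of prestige flowing repeatedly through citations from an arbitrary starting constant:

```python
def prestige_weights(cites, iterations=200):
    """Iteratively redistribute prestige: starting from an arbitrary
    constant, each journal shares its current score among the journals
    it cites, in proportion to its outgoing citations."""
    n = len(cites)
    w = [1.0 / n] * n
    for _ in range(iterations):
        new = [0.0] * n
        for i in range(n):
            out = sum(cites[i])
            for j in range(n):
                if out:
                    new[j] += w[i] * cites[i][j] / out
        total = sum(new)
        w = [x / total for x in new]
    return w

# Toy matrix: cites[i][j] = citations from journal i to journal j
# (the diagonal is zero because self-citations are excluded).
weights = prestige_weights([
    [0, 30, 10],
    [20, 0, 5],
    [5, 10, 0],
])
```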
The h-index was proposed in 2005 by Professor Jorge Hirsch, as a metric for evaluating individual scientists; the paper is freely available.
The h-index rates a scientist's performance based on his or her career publications, as measured by the lifetime number of citations each article receives. The measurement is dependent on both number of publications and number of citations to these publications.
If you list all of an academic's publications in descending order of the number of citations received to date, their h-index is the highest number of their articles, h, that have each received at least h citations. So, their h-index is 10 if 10 articles have each received at least 10 citations; their h-index is 81 if 81 articles have each received at least 81 citations. Their h-index is one if all of their articles have each received exactly one citation, but also if only one of their articles has been cited at all.
The h-index can be applied to any group of articles, including those published in a particular journal in any given year.
In the fictitious example below, the 80 articles published in a journal in a given year have been ranked by lifetime citations. The h-index of this journal for this year’s content is 22, since 22 articles have each received at least 22 citations.
| Rank | 1 | 2 | 3 | … | 21 | 22 | 23 | … | 78 | 79 | 80 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Citations to date | 72 | 63 | 59 | … | 24 | 24 | 21 | … | 0 | 0 | 0 |
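A minimal sketch of the calculation (the function name is ours, and the citation counts below are invented to match the journal example):

```python
def h_index(citation_counts):
    """Largest h such that h items have each received at least h citations."""
    ranked = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Journal-style example: 22 articles with at least 22 citations each,
# while the 23rd-ranked article has only 21.
counts = [72, 63, 59] + [30] * 17 + [24, 24, 21] + [0] * 57
```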
Altmetrics
Altmetric aims to measure the impact of papers promptly after publication by tracking the online attention they receive. To do this, its algorithm captures online mentions in social media and other web-based data such as bookmarks, tweets, Facebook posts, news items, and scientific blogs. Altmetric has been integrated into Scopus as a powerful third-party web application that runs within the sidebar of Scopus article and abstract pages. It is a quick and easy way to see all of the social or mainstream media mentions gathered for a particular paper, as well as saved counts on popular reference managers.
The Altmetric algorithm computes an overall score taking into account volume (number of mentions), importance of the sources (news items being weighted more than blogs, in turn weighted more than tweets), and authoritativeness of the authors (a mention from an expert in the field is worth more than one from a lay person). The visual representation (the Altmetric "donut") shows the proportional distribution of mentions by source type, and links to the source data are available.
The Altmetric application is currently installed for all Scopus users by default (you can choose to disable it if you wish) and appears in the sidebar when data is available for the article currently being viewed. It can usually be found underneath the "Related Documents" box on the right hand side of the screen. For more information, please click here.
It is not always necessary to produce tables ranking a journal against other journals to measure their performance. There are many other ways of assessing the development of a journal by tracking its own performance patterns over time. Scopus is invaluable for such analyses, supporting citation analysis from 1996 over any number of years that is appropriate to the question being addressed. Our editors have complimentary access to Scopus via Elsevier Editorial System (EES).
Scopus Journal Analyzer
The Scopus Journal Analyzer provides you with a quick, easy and transparent view of journal performance, including two journal metrics (SJR and SNIP, also available at www.journalmetrics.com). Using citations from nearly 19,500 titles from 5,000 international publishers, the Scopus Journal Analyzer gives access to an objective overview of the journal landscape going back to 1996.
- turn a laborious task into a simple comparison, gaining more time to analyze the results and make clear, informed decisions
- analyze and manage journals more effectively
- learn from the competitive landscape
- identify new growth areas
- set out a strategy to improve performance.
The Scopus Journal Analyzer’s unique functionality provides you with six graphical representations of the journals:
SCImago Journal Rank (SJR) is a measure of the scientific prestige of scholarly sources: value of weighted citations per document. A source transfers its own 'prestige', or status, to another source through the act of citing it. A citation from a source with a relatively high SJR is worth more than a citation from a source with a lower SJR. For more information on SJR click here.
Source Normalized Impact per Paper (SNIP) measures contextual citation impact by weighting citations based on the total number of citations in a subject field. The impact of a single citation is given higher value in subject areas where citations are less likely, and vice versa. For more information on SNIP click here.
Citations displays the total number of citations the selected journals receive over the course of each year.
Documents shows the number of articles published by each journal over time.
% Not Cited provides the percentage of all documents that did not receive citations in that year.
% Reviews provides the percentage of documents that are reviews.
Article and issue types
Evaluating differences between average citations per item type, and/or per article in distinct issue types, may raise points for consideration when setting the future strategy of a journal.
Review articles are, on average, cited three times more frequently than original research articles; this is illustrated in our Perspectives in Publishing paper. This is a useful benchmark for assessing the topicality of reviews published in a particular journal against the average item that it publishes.
Similarly, special/themed issues and supplements are often published with the aim of attracting citations at a higher rate than a regular issue.
For each journal, a particular level of citation can be assigned that indicates a ‘key article’. The number of years over which incoming citations are counted, and the level at which an article begins to be considered ‘key’, will vary per subject area and/or journal.
The proportion of journal content that is ‘key’ can indicate improvements in commissioning activities, in attracting the choicest research and/or authors, or in whatever activity(ies) have been undertaken to attract such content.
Content assessment by citations counted over varying time periods can be done very flexibly using the Scopus Citation Tracker.
High quality journal content, that is useful to a scientific community and that supports the development of the field, is generally indicated by citation inflow. It follows that a low proportion of content that is not cited is desirable, and reductions in the proportion of uncited material can indicate improvements in overall journal quality.
The time after which an article is considered uncited, and the desirable level of uncited content, will vary per journal and per field.
The scientific quality of a journal need not only be evaluated through citation counts and their analysis. Other aspects of a journal also deserve attention, and can be just as useful to highlight areas of excellence, and those that might need further attention.
A good example of the wide range of information available can be found in the Journal Report. This report is compiled on a regular basis to keep you informed and to assist you in your editorial work; it can be requested from your Publisher. The report covers the various steps of the peer review and publication process.
On the Editorial Process
The report starts with an overview of the editorial figures by listing the number of manuscripts submitted for review as well as the number on which a decision was reached. The numbers are shown per month and per handling editor. This section includes statistics on processing times: how long did it take for a submission to be reviewed, how long before revised versions were submitted, and how long before a final decision was reached?
The rejection rate is often a good indicator of journal quality. The most recent rejection rate, as well as the data for previous years, are shown in the report.
Accepted articles are tracked closely from the moment they arrive in one of our thirteen production locations. The data generated are included in the report, which gives numbers of articles by month and handling editor, with page counts and the progress of the journal’s issues.
Speed of publication is an important quality characteristic, and a separate section is therefore focused on that subject.
The number of times an article has been downloaded from ScienceDirect gives an excellent and rapid indication of its importance and relevance. The report lists the 20 most-downloaded papers. The annual total figure for the journal is also available and can be compared with historical data.
On Geographical Distribution
The country of origin of a journal’s authors and readers is also thoroughly covered in the Journal Measures report, in terms of both submitted and accepted manuscripts, and downloads per country. The countries making the largest contributions, as well as those with the biggest changes, are highlighted and may influence your editorial strategy.
Editor, Author and Reviewer Feedback Programmes
Our unique Research & Academic Relations Department regularly records the opinions of editors, authors and reviewers on their experiences of working with a journal that we publish. We use this feedback as a valuable means of setting strategy to improve our journals and relationship with our editors, and we use appropriate measures as described above to indicate our success. Changes in satisfaction levels over time can indicate areas for improvement, and/or improvements in previously targeted areas.
You can request these reports from your Publisher. If your journal is not participating and you would like to be included, please contact your Publisher.