Information overload and filter failure paving the way for bibliometrics
[note color="#f1f9fc" position="right" width=400 margin=10 align="alignright"]
Glossary
Bibliometrics — the science of using statistics and mathematical methods to analyze bibliographic data and references.
Bibliographic data — data about published literature, including the number of publications and citations of journals, authors, institutions, subject areas and countries.
[/note]
The recent acceleration of science and scholarly communication has resulted in “information overload” and, more recently, “filter failure.” There are now more researchers and more papers than ever, which has heightened the importance of bibliometric and other systematic measures of science. Bibliometrics is a fairly new discipline, but it has grown impressively in recent years thanks to advances in computation and data storage as well as a growing need for academic accountability.
Bibliometrics and other measures are being increasingly used as a way to systematically compare diverse entities (authors, research groups, institutions, cities, countries, disciplines, articles, journals, etc.) in a variety of contexts. These include an author deciding where to publish, a librarian working on changes in their library’s holdings, a policymaker planning funding budgets, a research manager putting together a research group, and a publisher or editor benchmarking their publication to those of competitors.
The multidisciplinary landscape of journal evaluation
More sophisticated tools and methods now enable more meaningful comparisons of scholarly journals based on article citations and other data. While the journal evaluation landscape was long dominated by a scarcity of measures, many journal metrics are now available, providing a more varied and complete picture of a journal’s impact. Different indicators measure different aspects of journal performance, so it is now possible to reflect on various facets of a journal’s achievements, to the benefit of many in the academic and scholarly world:
- Librarians may take into account a variety of measures when they need to make informed decisions about content collection. Various criteria may have different degrees of importance for different libraries or institutes depending on their focus.
- Journal editors each have their own strategies for journal success, often stemming from a specific mix of strategic priorities. Their decisions may be informed by the newly available variety of indicators, which can help track progress along the specific path each has decided to take.
- Researchers may consider various aspects of journal performance when considering where to publish their research. The relatively new wealth of available measures provides authors with information on several criteria that can guide their choice.
Two metrics for two purposes
Scopus features two bibliometrics indicators to measure a journal's impact: SNIP (Source Normalised Impact per Paper) and SJR (SCImago Journal Rank). These metrics use citation data to reveal two different aspects of a journal's impact:
- SNIP takes into account the field in which a journal operates, smoothing differences between field-specific properties such as the number of citations per paper, the amount of indexed literature, and the speed of the publication process.
- SJR takes into account the prestige of the citing journal. Citations are weighted depending on whether they come from a journal with a high or low SJR.
[note color="#f1f9fc" position="center" width=800 margin=10]
SNIP: How does it work?
SNIP was developed by Dr. Henk Moed, who was then part of the bibliometrics group at the Centre for Science and Technology Studies (CWTS) of Leiden University. It is calculated as a ratio with a numerator and a denominator.
SNIP's numerator gives a journal's raw impact per paper (RIP). This is simply the average number of citations received in a particular year by papers published in the journal during the three preceding years.
SNIP's denominator is the Database Citation Potential (DCP). Authors in different scientific subfields cite papers at very different rates, so for each journal an indicator of the citation potential in the subject field it covers is calculated; this citation potential serves as SNIP's denominator.
SNIP = RIP ÷ DCP
[/note]
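The ratio above can be sketched in a few lines of code. This is only an illustration with made-up numbers, not the official CWTS implementation; in practice the DCP calculation is considerably more involved.

```python
# Illustrative sketch of the SNIP ratio. The figures below are hypothetical;
# the real DCP procedure at CWTS is more elaborate than a single constant.

def raw_impact_per_paper(citations_this_year, papers_prev_three_years):
    """RIP: average citations received in a given year by papers the
    journal published in the three preceding years."""
    return citations_this_year / papers_prev_three_years

def snip(rip, dcp):
    """SNIP normalises a journal's raw impact by its field's
    Database Citation Potential."""
    return rip / dcp

# Example: a journal whose 150 recent papers drew 450 citations this year,
# in a field with an (assumed) citation potential of 2.0.
rip = raw_impact_per_paper(450, 150)   # 3.0 citations per paper
print(snip(rip, dcp=2.0))              # 1.5
```

The same raw impact of 3.0 would yield a SNIP of 3.0 in a field with a citation potential of 1.0, which is exactly the field-smoothing effect described above.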
As of October, the following changes, detailed here, apply to SNIP:
- A different averaging procedure is used in the calculation of the denominator to reduce the impact of outliers.
- A correction factor is introduced to the weighting of citations from journals with low numbers of references.
- The new calculation brings the average SNIP score across all journals in Scopus to approximately one.
[note color="#f1f9fc" position="center" width=800 margin=10 align="alignnone"]
SJR: How does it work?
SJR was developed by the SCImago Research Group of the University of Granada in Spain, a group dedicated to information analysis, representation and retrieval using visualization techniques.
SJR looks at the prestige of a journal, as indicated by the sources of the citations it receives, rather than its popularity as measured by counting all citations equally. Each citation received by a journal is assigned a weight based on the SJR of the citing journal: a citation from a journal with a high SJR value is worth more than one from a journal with a low SJR value.
[/note]
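Because each journal's weight depends on the scores of the journals citing it, this kind of prestige measure is computed iteratively. The sketch below is a simplified, PageRank-style illustration of the idea, not SCImago's actual algorithm, which includes additional terms and normalisations; the citation fractions are invented.

```python
# Simplified sketch of SJR-style prestige weighting: a PageRank-like
# iteration in which a journal's score depends on the scores of the
# journals that cite it. Not the real SCImago algorithm.

def prestige_scores(cite_frac, n_iter=50, damping=0.85):
    """cite_frac[i][j] = fraction of journal i's references that point
    to journal j. Returns scores normalised to an average of one."""
    n = len(cite_frac)
    prestige = [1.0] * n
    for _ in range(n_iter):
        prestige = [
            (1 - damping) + damping * sum(prestige[i] * cite_frac[i][j]
                                          for i in range(n))
            for j in range(n)
        ]
    mean = sum(prestige) / n
    return [p / mean for p in prestige]

# Three hypothetical journals; journal 0 is cited heavily by the others,
# so its citations carry the most weight and it ends up with the top score.
refs = [
    [0.0, 0.5, 0.5],
    [0.9, 0.0, 0.1],
    [0.8, 0.2, 0.0],
]
scores = prestige_scores(refs)
```

Counting all citations equally would treat a citation from journal 1 and journal 2 the same; here, citations from the high-scoring journal 0 contribute more, which is the "prestige rather than popularity" distinction the note describes.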
As of October, the following changes, detailed here, apply to SJR:
- A heavier weighting of the more prestigious citations that come from within the same or closely related fields.
- A compensating factor to overcome the decrease of prestige scores over time as the number of journals increases.
- A more readily understandable scoring scale with an average of one.