Citation ethics for editors

How Impact Factor engineering can damage a journal’s reputation

The Author

Sarah Huggett is Publishing Information Manager for Research & Academic Relations at Elsevier. As part of the Scientometrics & Market Analysis team, she provides strategic and tactical insights to colleagues and publishing partners, and strives to inform the bibliometrics debate through various internal and external discussions. Her specific interests are in communication and the use of alternative metrics such as SNIP and usage for journal evaluation. After completing an M.Phil in English Literature at the University of Grenoble (France), including one year at the University of Reading (UK) through the Erasmus programme, she moved to the UK to teach French at Oxford University before joining Elsevier in 2006.

A version of this article appeared in Editors Update.

Science has been accelerating rapidly, resulting in “information overload” and, more recently, “filter failure.” There are now more researchers and more papers than ever, which has heightened the importance of bibliometric measures. Bibliometrics is a fairly new discipline, but it has seen impressive growth in recent years thanks to advances in computation and data storage, which have improved the accessibility and ease of use of bibliometric measures (for instance, through interfaces such as SciVerse Scopus and SciVal).

Bibliometrics are increasingly used to compare diverse entities (authors, research groups, institutions, cities, countries, disciplines, articles, journals, etc.) systematically and in a variety of contexts. These include an author deciding where to publish, a librarian reviewing their library’s holdings, a policymaker planning funding budgets, a research manager assembling a research group, and a publisher or editor benchmarking their journal against competitors.

Enter the Impact Factor

From this perspective, journal metrics can play an important role for editors, and we know this is a topic of interest because of the high attendance at our webinar on the subject earlier this year. There are many different metrics available, and we always recommend looking at a variety of indicators to build as thorough a bibliometric picture as possible, providing insights into the diverse strengths and weaknesses of any given journal. (An article by Dr. Mayur Amin and Dr. Michael Mabe in Perspectives in Publishing, “Impact Factors: use and abuse,” explores this issue.)

Bibliometrics — the science of using statistics and mathematical methods to analyze bibliographic data and references.

Bibliographic data — data about published literature, including the number of publications published and the number of citations received by, for instance, an author, institution, subject area, country, or journal.

Journal metrics — Various metrics are used to evaluate journals. For more about those metrics, see this explanation on Editors’ Home.

However, we are well aware that one metric in particular seems to be considered especially important by most editors: the Impact Factor. Opinions on the Impact Factor are divided, but it has long been used as a prime measure in journal evaluation, and many editors see it as part of their editorial duty to try to raise the Impact Factor of their journal. Frank Thorsten Krell examines that issue in his 2010 Learned Publishing article “Should editors influence journal impact factors?”

An editor’s dilemma

There are various techniques by which this can be attempted, some more ethical than others, and it is an editor’s responsibility to stay within the bounds of ethical behavior. It might be tempting to try to improve one’s journal’s Impact Factor ranking at all costs, but Impact Factors are only as meaningful as the data that feed into them. This issue is explored in depth by Dr. J. Reedijk and Dr. Henk Moed in their Journal of Documentation article “Is the impact of journal impact factors decreasing?”
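For context, the two-year Impact Factor boils down to a simple ratio: citations received in a given year by the items a journal published in the previous two years, divided by the number of citable items (articles and reviews) it published in those two years. A minimal sketch in Python, with purely illustrative numbers:

```python
def impact_factor(citations_received, citable_items):
    """Two-year Impact Factor for year Y: citations received in Y by
    items published in Y-1 and Y-2, divided by the number of citable
    items (articles and reviews) published in Y-1 and Y-2."""
    return citations_received / citable_items

# Illustrative numbers only: 450 citations in 2010 to 300 citable
# items published in 2008-09 yield a 2010 Impact Factor of 1.5.
print(impact_factor(450, 300))  # 1.5
```

This also makes clear why the metric is open to manipulation: anything that adds citations within the window, legitimate or not, raises the ratio.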

If an Impact Factor is heavily inflated as a result of a high proportion of gratuitous self-citations, it will not take long for the community to identify this (especially in an online age of easily accessible citation data). This realization can damage the reputation of a journal and its editors, and might lead to a loss of quality manuscript submissions, which in turn is likely to affect the journal’s future impact. The results of a recent survey draw attention to the frequency of one particularly unethical editorial practice in business journals: coercive citation requests – editors demanding that authors cite their journal as a condition of manuscript acceptance. The results were published in Science in an article called “Coercive Citation in Academic Publishing.”


Elsevier’s philosophy on the Impact Factor

Elsevier uses the Impact Factor (IF) as one of a number of performance indicators for journals. It acknowledges the many caveats associated with its use and strives to share best practice with its authors, editors, readers and other stakeholders in scholarly communication. Elsevier seeks clarity and openness in all communications relating to the IF and does not condone the practice of manipulation of the IF for its own sake.

This issue has already received some attention from the editorial community following an editorial in the July 2012 issue of the Journal of the American Society for Information Science and Technology by Editor-in-Chief Blaise Cronin.

Although some Elsevier journals were highlighted in the study, our analysis of 2010 citations to 2008-09 scholarly papers (replicating the 2010 Impact Factor window using Scopus data) showed that half of all Elsevier journals have less than 10 percent journal self-citations, and 80 percent have less than 20 percent. This can be attributed to the strong ethical standards of the editors who work with us, and it is reflected in our philosophy on the Impact Factor and our policy on journal self-citations: Elsevier takes a firm position against any “Impact Factor engineering” practices.
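For readers who want to perform this kind of check themselves, the underlying calculation is straightforward: count what share of a journal’s incoming citations within the Impact Factor window come from the journal itself. A minimal sketch in Python; the journal IDs and citation records are hypothetical stand-ins for whatever a citation database such as Scopus would supply:

```python
def self_citation_rate(citing_journal_ids, journal_id):
    """Share of incoming citations (counted over the Impact Factor
    window) that originate from the journal itself. One entry in
    `citing_journal_ids` per citation received."""
    if not citing_journal_ids:
        return 0.0
    self_cites = sum(1 for j in citing_journal_ids if j == journal_id)
    return self_cites / len(citing_journal_ids)

# Hypothetical example: 12 of 100 incoming citations come from the
# journal itself, i.e. a 12 percent self-citation rate.
citations = ["J-0001"] * 12 + ["J-0002"] * 88
print(self_citation_rate(citations, "J-0001"))  # 0.12
```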

So, what is the ethically acceptable level of journal self-citations?

There are probably as many answers to this question as there are journals. Journal self-citation rates vary between scientific fields, and a highly specialized journal is likely to have a larger proportion of journal self-citations than a journal of broader scope. A new journal is also prone to a higher journal self-citation rate as it needs time to grow in awareness amongst the relevant scholarly communities.


Elsevier’s policy on journal self-citations

An editor should never conduct any practice that obliges authors to cite his or her journal either as an implied or explicit condition of acceptance for publication. Any recommendation regarding articles to be cited in a paper should be made on the basis of direct relevance to the author’s article, with the objective of improving the final published research. Editors should direct authors to relevant literature as part of the peer review process; however, this should never extend to blanket instructions to cite individual journals. …

Part of your role as Editor is to try to increase the quality and usefulness of the journal. Attracting high-quality articles from topical areas is likely the best approach. Review articles tend to be more highly cited than original research, and letters to the Editor and editorials can also be beneficial. However, practices that “engineer” citation performance for its own sake, such as forced self-citation, are neither acceptable nor supported by Elsevier.

As mentioned in the Thomson Reuters report “Journal Self-Citation in the Journal Citation Reports”: “A relatively high self-citation rate can be due to several factors. It may arise from a journal’s having a novel or highly specific topic for which it provides a unique publication venue. A high self-citation rate may also result from the journal having few incoming citations from other sources. Journal self-citation might also be affected by sociological factors in the practice of citation. Researchers will cite journals of which they are most aware; this is roughly the same population of journals to which they will consider sending their own papers for review and publication. It is also possible that self-citation derives from an editorial practice of the journal, resulting in a distorted view of the journal’s participation in the literature.”

There are various ethical ways editors can try to improve the Impact Factor of their journal. At Elsevier, publishers work with editors to provide insights into the relative bibliometric performance of keywords, journal issues, article types, authors, institutes, countries and other factors, all of which can be used to inform editorial strategy. Journals may also publish additional content – such as official society communications, guidelines, taxonomies, methodologies, special issues on topical subjects, invited content from leading figures in the field and interesting debates on currently relevant themes – which can help to increase the Impact Factor and other citation metrics.

A high-quality journal targeted at the right audience should enjoy a respectable Impact Factor in its field, which should be a sign of its value rather than an end in itself. Editors often ask me how they can raise their journal’s Impact Factor, but the truth is that by working to improve the quality and relevance of their journal, they are likely to reap rewards in many areas, including a rising Impact Factor. And this is the way it should be: a higher Impact Factor should reflect a genuine improvement in a journal, not a meaningless game that reduces the usefulness of available bibliometric measures.


