“Predatory” vs trustworthy journals: What do they mean for the integrity of science?

August 15, 2018

By Sacha Boucherie

An Elsevier leader answers questions about the practices of predatory journals and the role of trustworthy publishers

Hundreds of thousands of researchers worldwide have published in so-called predatory journals in recent years. Among them are researchers from renowned research institutes and universities, employees of federal authorities – even a Nobel laureate. These revelations come from a recent study led by a group of journalists and data experts from more than a dozen leading media outlets across Europe, Asia and the United States and facilitated by the International Consortium of Investigative Journalists (ICIJ).

The group analyzed 175,000 scientific articles published between 2012 and 2018 by five of the world’s largest pseudo-scientific platforms. Their dossier and corresponding database, now in the public domain, have reignited a global debate about the integrity of science.

Trusted research information remains an important cornerstone of the scientific process and contributes to the progress of science and humanity. With growing concerns about “predatory” publishing, what role do trustworthy publishers play in ensuring trust and integrity?

I interviewed Dr. Philippe Terheggen, Managing Director of STM Journals at Elsevier, delving into the most common issues and questions the research community and general public raise about “predatory” journals.

What is a “predatory” journal?

There is no agreed-upon definition, and perceptions of what the term “predatory” means vary widely. A good starting point, from Shamseer et al., is that predatory journals “actively solicit manuscripts and charge publication fees without providing robust peer review and editorial services.”

Authors publishing in predatory journals frequently do not receive the services or journal attributes they sought and believed they had paid for. Such deception is a hallmark of predatory journals and commonly includes promises of (non-existent) peer review, fake impact factors, fake editors and even misleading journal names uncannily similar to those of well-known, legitimate journals.

At their core, trustworthy publishers are deeply committed to, and make significant investments in, disseminating research information that can be trusted, is relevant for research, and is presented in ways that serve efficient knowledge transfer, which in turn supports quality research. This is in direct contrast to predatory publishers, who show no interest in, and do not invest in, the integrity and relevance of the published record in support of advancing research.

What factors contribute to the proliferation of predatory journals? What role does open access play?

For many academics, career progression depends on the research papers they publish. In this context, an open access (pay-to-publish) model can be misused: it creates an opportunity to exploit authors’ willingness to pay to publish without delivering the full editorial and publication services of a reputable publisher, including a thorough peer review process and a commitment to long-term archiving.

Technology can facilitate illegitimate journals because setting up a website, spamming thousands of potential authors and receiving electronic payments is far quicker than setting up and selling electronic journal subscriptions.

Decisions about journal subscriptions have traditionally been made by librarians, who bring to this responsibility considerable expertise and experience in assessing journals and publishers. There is much more variation in authors’ ability to assess an unfamiliar journal: inexperienced researchers or those in less mature institutes may be more likely to fall prey to “predatory” journals. Misleading practices such as bogus impact factors, copying the names of long-standing journals or misrepresenting the location of the publisher can make it difficult for even experienced authors to judge a new journal.

Of course, an open access model is employed by increasing numbers of reputable journals, whether established titles or new launches, and should not in itself be taken as an indicator of predatory publishing. Alongside trustworthy full gold open access titles, hybrid journals continue to play an important role in providing a means for authors to publish in established, trusted journals under the open access model.

Why does peer review matter?

Peer review – the process by which experts in the field check research papers for validity, significance, originality and clarity – is the cornerstone of the scientific validation process. The fact that the essence of the process has not changed in the centuries since the first journal was published is one clear indicator of its value. Although not without challenges, many of which were raised in a 2016 survey of researchers by Elsevier, peer review is still viewed as the fairest way to evaluate research quality.

At its core, the peer review process separates scientific conclusions from speculation and opinion. It supports the cumulative nature of scientific knowledge, allowing researchers to build on others’ insights. Simply stated, peer review increases research quality.

Some outcomes of peer review, such as early detection of plagiarism cases and conflicts of interest, are largely invisible yet play a hugely valuable role in maintaining the integrity of the research record. Other benefits of the peer review process include training and development opportunities for early career researchers and the opportunity to give and receive constructive feedback that improves the research process overall.

Though peer review has not changed in its essence since the earliest journals were published, experimentation and innovation around the process continue. Advances that contribute to transparency around peer review, such as open peer review reports, can play a useful part in distinguishing those journals offering a robust peer review process.

What does it take to maintain high quality peer review?

Maintaining a high-quality peer review process takes significant time and effort by many parties, including, of course, the reviewers themselves. Publishers support the process, working in partnership with the research community.

The peer review process starts with the establishment and subsequent rotation of editorial teams, often selected and supported by a publisher. Editors check all new submissions for basic scientific soundness, standard of English, how well the submission matches the audience of the journal, and ethical requirements such as competing interest statements. Elsevier’s editorial systems provide editors with a Crossref Similarity Check report for every submission to support them in detecting plagiarism and duplication.
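
The similarity report itself comes from Crossref’s service, but the basic idea behind this kind of screening (flagging submissions that share an unusually large amount of text with previously published work) can be illustrated with a short sketch. The Python snippet below is a simplified, hypothetical illustration rather than Crossref’s actual method: it compares a submission against a small set of reference texts using word 5-gram overlap and flags anything above a chosen threshold; the function names and threshold are invented for illustration.

```python
# Illustrative sketch of overlap-based similarity screening.
# This is NOT Crossref Similarity Check; it is a simplified stand-in
# (word 5-gram Jaccard similarity) to show the general idea.

def shingles(text: str, n: int = 5) -> set[tuple[str, ...]]:
    """Return the set of word n-grams ("shingles") in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission: str, source: str, n: int = 5) -> float:
    """Jaccard similarity between the shingle sets of two texts."""
    a, b = shingles(submission, n), shingles(source, n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def screen(submission: str, corpus: dict[str, str], threshold: float = 0.2) -> dict[str, float]:
    """Flag corpus documents whose overlap with the submission meets the threshold."""
    return {doc_id: score
            for doc_id, text in corpus.items()
            if (score := overlap_score(submission, text)) >= threshold}

# Hypothetical usage: flag any prior article whose 5-gram Jaccard overlap
# with the new manuscript is at least 0.2.
# flagged = screen(new_manuscript_text, {"doi:10.xxxx/example": prior_article_text})
```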

For submissions that pass this initial validation, editors carefully select appropriate reviewers (typically multiple reviews are required in order to make a publication decision), often using tools such as Elsevier’s find reviewer tool powered by Scopus. Finding the right reviewers with time available can in itself be a time-consuming process.

On acceptance of a review assignment, a reviewer can spend many hours assessing the paper and providing feedback to the editor and author. Having digested the feedback, the editor makes a judgment on the paper, which may include requesting revision by the author and, in some cases, a further round of peer review by the original reviewers or new ones. The entire peer review process is usually carried out via an online editorial system provided by the publisher. Elsevier’s recent announcement of an agreement to acquire Aries Systems is a good example of publishers investing in systems that provide the best possible support for the peer review process.

Training, finding, recognizing and retaining peer reviewers are core themes being addressed by many in the publishing industry, including Elsevier. A recent example is Mendeley’s partnership with ORCID (Open Researcher and Contributor ID), which allows users to import peer review records from their ORCID profile into their Mendeley profile by connecting their ORCID iD to their Mendeley account.

Peer review does not stop with the publication of an article. Editors and publishers can receive input from the research community on an article post-publication – for example, concerning a claimed error or ethical issue in the published research. At Elsevier, all such input is handled by editors and publishers in accordance with best practices from the Committee on Publication Ethics (COPE). Fair and robust investigation of a complex case can take hundreds of hours, with editors frequently requiring advice from Elsevier’s legal experts. A small percentage of papers need to be corrected or even retracted; legitimate journals do this in a transparent way that alerts readers when research turns out to be unreliable.

This commitment to the integrity of the research record is common to reputable publishers and journals and a key differentiator from predatory journals, which are unlikely to make such an investment.

How can researchers, authorities and the public distinguish between predatory and reputable journals?

Think Check Submit is a cross-industry initiative led by representatives from ALPSP, DOAJ, INASP, ISSN, LIBER, OASPA, STM, UKSG, and individual publishers. It’s an excellent example of how many parties, including Elsevier, are joining forces to address the issue of predatory publishing. Think Check Submit provides simple guidelines for authors to assess a journal before submitting an article. These guidelines also apply to readers looking for means to identify trusted sources. Other author education efforts exist, such as Elsevier’s Researcher Academy, whose offerings include a module on “Finding the right journal.”

Journal and publisher brands continue to play an important part in helping authors and consumers of published research identify reputable journals. The academic standing of editorial teams, the quality of the peer review process, citation metrics related to published articles, and marketing efforts by the publisher all play key roles in the perception of a journal brand.

In recent discussions, experts have considered the introduction of quality seals. Would quality seals help safeguard the integrity of science?

While the appeal of a quality seal is clear, the risk remains that it would be misappropriated by predatory journals, much as “white lists” can be abused. We have seen evidence of misuse of other quality stamps, such as misrepresentation of inclusion in indexing services or even use of established journal titles. Other challenges include identifying an appropriate, universally accepted awarding body and defining criteria for inclusion. And if authors were to rely largely on quality seals, “white lists” or “black lists,” they might be at a loss to assess new journals, putting those journals at an inherent disadvantage.

We will continue to seek solutions in collaboration with industry partners, but we believe that education and awareness efforts (such as Think Check Submit) remain the best first line of defense.

Is the pressure to publish too high? And are there too many publications in general?

For many researchers, publication remains strongly linked to promotion and funding opportunities. We are committed to quality and efficiency in the research publication process and believe that, in some cases, research outcomes could be communicated with fewer articles. High and increasing rejection rates and, in some cases, low online usage of individual articles suggest that all parties involved in the communication of scientific research can work together to further increase quality and relevance. However, we believe, and continue to demonstrate, that the body of published research can grow in volume and (citation) quality in line with the increasing number of researchers worldwide.

The pressure on researchers to publish and influence the direction of research internationally is something we recognize as a reality. As partners of the scientific community, our mission therefore is to optimize efficiency in the publication process to the benefit of authors, reviewers and editors alike. For example, how do we leverage the full value of peer review comments when an article is rejected and resubmitted to another journal? How do we make it as easy as possible for an author to find the right home for their research without having to continually resubmit their article? How can we minimize author effort in formatting articles to required journal standards before the article is accepted for publication? These are just some of the questions we are continually working to solve in the form of initiatives such as the Article Transfer Service and Your Paper Your Way. Coupled with the right technology platform, these efforts deliver efficient knowledge transfer of trusted information.

What does Elsevier recommend to contain the spread of predatory journals? Which parties need to work jointly on a solution?

Education and building awareness remain important themes and continue to receive our commitment in collaboration with other industry partners. Research institutes and research funders undoubtedly have key roles to play. Further collaboration and joint commitment to an evaluation system that focuses on quality above quantity of output would significantly decrease the incentives for predatory journals to operate. We are seeing positive progress in this regard with funders and research institutes taking a quality-over-quantity approach to research output evaluation. Elsevier remains committed to supporting such efforts, providing a selection of transparent citation-based metrics and working with a range of partners on mechanisms to assess research performance.
