Don’t miss the peer-review forest for the trees

Elsevier discusses recent issues pertaining to open access and peer review and makes an announcement about a title referenced in the Science magazine “sting”

October wasn't a particularly good month for peer review in the media. On October 4, Science magazine published results from a 10-month-long "sting." Reporter John Bohannon sent a highly flawed manuscript to 304 open access journals, a third of which are based in India, and found that 157 accepted it.

One journal, Drug Invention Today, was published by Elsevier on behalf of a third-party owner that was responsible for managing editorial operations. Elsevier has since decided to cancel the hosting contract and thus no longer publishes Drug Invention Today, along with two other journals from the same owner: Journal of Pharmacy Research and the International Journal of Chemical and Analytical Science.

Soon after the sting, The Economist published its perspective on the state of scholarly publishing in an October 19 article titled "Trouble at the Lab — Scientists like to think of science as self-correcting. To an alarming degree, it is not." The Economist article recounted the Science magazine sting but also explored the problems of reproducibility of scientific research, which is linked to the drawbacks of peer review, data sharing and the common practice of not publishing articles showing negative results.

The issues raised in these articles are not particularly new to the global scientific research community, but it is worth looking at the broader picture of what is being done to maintain the public's trust in science.

The Science 'sting'

The open access community's negative response to the sting article was swift, with critics arguing that the article equated open access journals with poor quality without comparing them to a control group of subscription publications. Peter Suber was one of many who wrote about what the Science article does and doesn't tell us.

While this was an investigative journalistic endeavor for the news section of Science (and not the peer-reviewed research section), opinions vary on whether it was fair. A discussion of that issue was held in a recent Q&A with a Scholarly Kitchen reporter.

Fair or not, there are some open access publishers who are not conducting peer review at an acceptable level, and some are accepting articles purely on the basis of ability to pay. Jeffrey Beall's list of predatory publishers is widely discussed within the community. In the Science article, Bohannon simply widened the scope to target publishers that are by no means predatory yet demonstrate clear lapses in both peer review judgment and process.

Shedding light on bad practices is an important role for the media to play, and it doesn't necessarily need to be conducted with as strict a methodology and peer review as a research paper to be informative. But what could otherwise have been a constructive endeavor was weakened by Bohannon's sweeping negative generalization about the quality of open access peer review, suggesting his data "reveal the contours of an emerging Wild West in academic publishing."

[pullquote align="right"]The reality is that the same problems with peer review could very well occur at subscription journals.[/pullquote]The reality is that the same problems with peer review could very well occur at subscription journals. Reviewers are human, and mistakes can and do occur, whether they are reviewing for open access or subscription journals. The implication that lapses happen more readily at OA journals because publishers gain financially by accepting more articles is therefore mistaken.

Predatory OA publishers may go this route, but any publisher that takes science seriously will steer clear of this. At Elsevier, our OA journals have the same rejection levels as subscription journals, and our hybrid journals offer the choice of making an article OA only after acceptance, so acceptance is never conditional on payment.

Reproducibility and peer review

The Economist article is informative but requires context to explain why it's so negative. It was influenced by the discussions raised at the Quadrennial International Congress on Peer Review and Biomedical Publication in Chicago this past September, a conference designed to expose and discuss problems for the scientific community to address. So while The Economist accurately reported that errors and inconsistencies are sometimes found in the scientific record, to depict them as rampant was misleading.

In today's world, scientific results are made much more visible and transparent by new technology, and are therefore more open to scrutiny, criticism and, when the community concludes it is needed, correction. This is good. What's equally positive is that editors, reviewers, publishers, funders, research institutions, and the researchers themselves are actively playing a critical role in encouraging and assisting in the collective effort of using the new technology to increase transparency and reduce mistakes. Unfortunately, as happens in all areas of society, the possibilities and technologies for identifying problems are often one step ahead of the insights and technologies available to fix them.

Providing the opportunities for researchers to make the data associated with their articles more widely available to the scientific community is another important avenue for improving transparency. Improvements in scientific quality will also come from the new technologies and investments that are enabling more scientists to access, analyze, assess and even re-process the continuously growing and widening volume of scientific data that is being generated by the research community today.

The subject of failed reproducibility is important and deserves attention, but it, too, is not as simple to explain as The Economist would suggest. Several people noted in the article's comment thread that the difficulty in reproducing results lay partly with the methods used to reproduce the results and did not necessarily mean that the original work was questionable.

Dr. Boyana Konforti, Editor-in-Chief of Cell Reports, digs into the reproducibility issue further in her recent blog post "In Defense of Science." She notes, among other explanations, that "the irreproducibility of scientific results can be due to many different factors such as problems in experimental design, lack of proper controls, lack of technical expertise, flawed statistical analyses, and scientific misconduct."

Still, science has ways of addressing the reproducibility issue. Systematic reviews and meta-analyses, where the totality of available scientific evidence is considered, enable scientists to put results into context, preventing one positive or negative outlier from biasing our understanding of a particular scientific issue.

Publishers are also actively working to enhance peer review, which for all its virtues is not meant to fact-check the activities in the lab. To name a few initiatives, there is more training available, along with improved databases and software to help find and enlist the most appropriate and specialized experts in the field to peer review a paper.

In addition, while the vast majority of reviewers still favor anonymous review, a growing number of reviewer reports are becoming available to readers, offering better transparency about exactly what was reviewed and how.

New possibilities for tackling the problem of irreproducible research are also provided through opportunities for researchers to make the data associated with their articles more widely available to the scientific community.

Reproducibility in science could also benefit from authoring and publishing negative results. Elsevier, in fact, is working on a series of new journals featuring negative results and focusing on the reproducibility of experiments and methods. We have also begun to roll out pre-registration of research reports, which guarantees publication of future results, regardless of findings, provided that they adhere precisely to their registered protocol.

In summary, the good news is that despite some flaws, the scientific community, utilizing peer review, serves the broader population well. Even better news is that the combination of improved technology, innovation and an experimental approach means problems are better and more quickly addressed as they come to light. We welcome a continued dialogue about these issues.


The Author

As VP and Head of Global Corporate Relations at Elsevier, Tom Reller (@TomReller) leads a global team of media, social and web communicators for the world's largest provider of scientific, technical and medical (STM) information products and services. Together, they work to build on Elsevier's reputation by promoting the company's numerous contributions to the health and science communities, many of which are brought to life in our new online community and information resource: Elsevier Connect.

Reller directs strategy, execution and problem-solving for external corporate communications, including media relations, issues management and policy communications, and acts as a central communications counsel and resource for Elsevier senior management. Additionally, he develops and nurtures external corporate/institutional relationships that broaden Elsevier's influence and generate good will, including partnerships developed through the Elsevier Foundation.
