How editors and reviewers can contribute to minimize waste in research

The concept of evidence-based research


On July 19th, 2001, "the United States Office for Human Research Protections (OHRP) suspended nearly all federally-funded medical research involving human subjects at Johns Hopkins University". The reason was simple (and tragic). A 24-year-old technician, Ellen Roche, volunteered to take part in a study and died. The study was designed to provoke a mild asthma attack to help doctors discover the reflex that protects the lungs of healthy people against asthma attacks. Ms. Roche inhaled hexamethonium but became ill and was put on a ventilator. Her condition deteriorated, and she sadly died on June 2, 2001.

The price of ignorance

An analysis of the events (and their follow-up) reached a striking conclusion: the project leader and the Institutional Review Board failed to uncover published literature on the toxic effects of inhaling hexamethonium. According to the OHRP, this information was "readily available via routine MEDLINE and Internet database searches, as well as recent textbooks". The project leader had only performed a standard PubMed search and consulted standard, current-edition texts. In other words, neither the project leader nor the ethics committee performed a sufficiently thorough literature search, and both thus remained ignorant of vital material.

A lesson from the giants of history

From the very beginning of modern science, it has been an explicit ideal that new knowledge builds upon existing knowledge. This is often illustrated by Sir Isaac Newton's remark, "If I have seen farther it is by standing on the shoulders of giants". Consider also Lord Rayleigh's 1884 observation that "the work which deserves, but … does not always receive, the most credit is that in which discovery and explanation go hand in hand, in which not only are new facts presented, but their relation to old ones is pointed out". To wit: each new result should be interpreted in the context of earlier research.

How to live up to this ideal is the question. As Richard Light and David Pillemer argued 100 years after Rayleigh, researchers need systematic and transparent knowledge of previous studies on the same topic. In many situations, this can be achieved through a systematic review of earlier similar studies.

The elephant in the room?

But aren't we just stating the obvious? Is this not already standard practice? One would think so! Over the last 20-25 years, a number of meta-studies have shown that researchers do not commonly use systematic reviews to justify new studies. In a seminal work, Karen Robinson and Steve Goodman demonstrated that less than 45% of newer studies referred to earlier similar studies. They also identified new original studies that could have cited between three and more than 130 earlier studies, yet the median number actually cited was two! Other analyses (e.g. Fergusson, 2005; Robinson, 2011; Sawin, 2015) indicate the same lack of systematicity in referring readers to earlier similar studies (see figure below).


Figure 1
Three studies evaluated the Prior Research Citation Index (PRCI) as defined by Robinson (2011): the number of earlier similar studies that were cited divided by the number of earlier studies the authors could have cited. On average, the three meta-research studies found that only 21% of earlier similar studies were cited.

Potential excuses aside, a plausible interpretation of the results from meta-research is that authors seem to have little intention of referring comprehensively to earlier studies but instead rely solely on those references which reinforce the arguments they seek to advance.

Reinventing the wheel, one study at a time

When references become mere "window dressing" in this way, earlier studies are not actually used to justify the new study, and considerable redundancy results. A 2014 investigation identified 136 new studies published after a systematic review had already shown the intervention to be effective. Most surprising of all, 73% of those new studies cited the very systematic review demonstrating that no further research was needed. Another, more concerning phenomenon brought to light by several analyses (e.g. Lau, 1992; Juni, 2004; Fergusson, 2005; Andrade, 2013; Clarke, 2014; Haapakoski, 2015) is that studies are occasionally initiated even when the knowledge available at the time clearly indicated that the treatment was effective and there was no need for further research. We dare to conclude that there is room for improvement!

Introducing the EBRNetwork

Continuing to expose patients to unnecessary studies is unethical, limits the funding available for important and relevant research, and diminishes the public's trust in research. The EBRNetwork was established in 2014 to raise awareness of this inappropriate practice and to promote a systematic and transparent approach to justifying (or even designing) new studies.

In October 2018, a new EU-funded European network called "EVBRES" was established. EVBRES aims to raise awareness of the need to use systematic reviews when planning new health studies and when placing new results in context. EVBRES is an open network, and everyone with an interest in this topic is very welcome.

The role of editors and peer review

So where do editors and reviewers fit in? In 2016, the EBRNetwork identified a number of stakeholders relevant to the concept of EBR. Among the key stakeholders were editors and reviewers, whose responsibilities would be:

  • To assess whether the rationale and design of studies are adequately described within the context of systematic reviews of prior research.
  • To evaluate whether the description of earlier research is sufficient to enable interpretation of the results of submitted studies within the totality of relevant evidence.
  • To evaluate whether proposals for further research take account of earlier and ongoing research.
  • To evaluate whether proposals for further research include clear descriptions of target populations, interventions, comparisons, outcome measures, and study types.

Have your say!

These "responsibilities" are, of course, open for discussion, and we invite everyone to share their views and suggestions. You can do so by commenting below, by joining EVBRES, or by contacting the EBRNetwork if you are outside Europe. We look forward to hearing from you!


Written by

Hans Lund


Hans Lund is Professor at the Centre for Evidence-Based Practice at Western Norway University of Applied Sciences, Norway. Hans has worked with evidence-based practice and systematic reviews for more than 25 years as a writer of academic papers and books, a teacher, and a director of studies. Over the last 5-6 years, Hans has focused on the concept of "Evidence-Based Research" (EBR) and is the initiator and chair of the Evidence-Based Research Network. In 2018, this network obtained support to establish a European Network for EBR in the form of a COST Action, which was inaugurated in October 2018 and runs until October 2022.
Written by

Klara Brunnhuber


Klara Brunnhuber works as Senior Product Manager in Elsevier Operations’ Digital Content Services department, with a special focus on machine learning projects and content services for the new book submission system ELSA.

With a background in medicine and health informatics, she has worked in medical and science publishing for over 25 years. Prior to joining Elsevier in October 2016, she was a clinical editor and later product manager for BMJ’s knowledge resources BMJ Clinical Evidence and BMJ Best Practice.

She has been part of the Evidence-Based Research Network ever since its inception and serves as Vice-Chair and Leader of the Programme Management Group on the COST Action EVBRES.
