Peer Review

Improving the reporting of clinical research

An editor’s view


Dr David L Schriger

As well as his role as Deputy Editor of the Annals of Emergency Medicine, Dr Schriger is a member of the CONSORT and EQUATOR initiatives. His research focuses on improving the credibility of the medical literature through the detailed presentation of results in figures and tables.

The last 20 years have seen much written about the poor quality of the medical literature [1]. Recent endeavors such as EQUATOR (and its component reporting guidelines) and the Peer Review Congress (which has fostered interest in journal quality) have sparked considerable improvement. However, there is more to be done to improve both the quality of the science and the quality of its reporting. Journals can play an important role in both areas. The first step for journals interested in doing so is to move beyond three common misconceptions.

First, there is a misguided obsession with statistics, a misconception that distracts authors, reviewers, and readers from more fundamental issues [2]. Classical statistics is concerned with differentiating observations expected by chance alone from those unlikely to be due to chance, the latter suggesting a potentially important association. While random error is a legitimate concern, particularly for small studies with positive results, in clinical research concerns about random error are dwarfed, or should be dwarfed, by concerns about non-random error, also known as confounding or bias [3].

When problems occur in clinical studies, they are typically related to the methodology of the study, not the statistics. In reviewing more than 2,500 papers for Annals of Emergency Medicine and other journals over the past 25 years, I have seldom found a paper whose main deficiency was the use of the wrong statistic or the miscalculation of a statistic. In contrast, I routinely read studies that are poorly designed or that fail to account for confounding in their analyses or conclusions. I also commonly find studies that devote multiple paragraphs in the Methods and Results sections to statistical concerns but fail to include even a single sentence about non-random error. A skeptic might think that the obsession with statistics is a diversionary smokescreen designed to distract readers from fundamental problems with confounding and bias.

Second, journals often have ill-defined goals for their review process. A review process can ask several questions, including:

a) Is the topic of the paper appropriate for our audience?

b) Is the reporting of the science complete? Does the paper provide all of the information that a knowledgeable, critical reader needs to reach a conclusion about the work?

c) Is the science correct?

A common misconception is that c) is a legitimate goal of peer review. While it is certainly appropriate that the peer-review process filters out abject garbage (papers whose claims are unsubstantiated or ludicrous), care should be taken to ensure that reviewers are critiquing the research design, the analytic methods, and the quality of the reporting of the results, not the conclusion. Otherwise, journals will reject articles that conclude that ulcers are caused by bacteria simply because the conclusion is unexpected. Instead, peer review should focus on ensuring that readers have all the information they need to reach their own decisions about the paper's conclusion. From this perspective, the purpose of peer review is to bring readers complete presentations that meet methodological standards and standards for comprehensive reporting. Don't worry about whether the authors have found truth; worry about whether they have told a complete story. The scientific process will take care of the rest [4].

The third misconception is that article quality is the responsibility of the authors, not the journal. While it is certainly true that better journals tend to receive better papers, there is ample evidence that even papers in the highest-impact journals suffer from incomplete or suboptimal reporting [5,6]. Research suggests that these problems are corrected only if the journal identifies them and insists that they be fixed [7,8]. A journal must take an active role in setting expectations and enforcing them if the reporting of science is to be improved.

At Annals of Emergency Medicine, we recognized these issues and have taken a series of steps to improve our journal. I share with you a number of them so you may consider whether they would be appropriate for your journal.

In 1997, the editors recognized that bias was the greatest threat to the veracity of the work being published and decided that all research papers would be reviewed by one of a small cadre of ‘methodology/statistics’ reviewers in addition to the typical content reviewers. Experience had shown us that the ideal person for this role is not a full-time statistician but a clinician-researcher who thoroughly understands methodology and knows enough statistics to recognize when formal statistical review is needed. This program has proved successful: the quality of reviews has improved, as has the quality of the published papers [9-11]. Starting six years ago, this program was supplemented by a check of the appropriateness and quality of tables and figures in papers about to be offered acceptance or revision [12,13].

These two programs have improved the journal and have slowly trained the author community in the journal's standards (which are stated in detailed Instructions for Authors, initially composed in 2003 [14,15]). Over time, the methodology/statistics reviewers have had an easier time because papers arrive with many of our requirements already met. In summary, our experience leads me to offer the following guidance to journals trying to improve their quality:

1) The main problem is study methodology, not statistics. Put your efforts into carefully critiquing each paper's methodology. Do not assume that regular reviewers will do this well. Identify reviewers who are capable of doing this job and use them. With more and more physicians receiving clinical epidemiology training in public health and other graduate programs, finding such reviewers is becoming easier. If you want them to do lots of reviews, compensate them.

2) The second problem is the quality of reporting. Get familiar with EQUATOR-network.org and the reporting guidelines for different types of research (CONSORT, STARD, PRISMA, STROBE...). Recognize, however, that these guidelines may be insufficiently detailed regarding the specific nuances of your field and are not as strong on the presentation of results as they are on the presentation of methods. Augment them as needed.

3) Discourage papers that hide behind a torrent of statistics and models instead of showing readers the actual data. Editors and reviewers should ask, "Are the methods and results presented in sufficient detail that learned readers can decide whether they agree or disagree with the conclusion?" Focus on whether the paper is fully reported rather than on whether the science is correct.

By refocusing peer review on the paper's methodology, as opposed to its statistics, and on the quality of the reporting of the science, editors can improve the quality of the research articles in their journals.

References:

1 Altman DG. The scandal of poor medical research. BMJ 1994;308:283-284.

2 Schriger DL. Problems with current methods of data analysis and reporting, and suggestions for moving beyond incorrect ritual. Eur J Emerg Med. 2002;9:203-207.

3 Goodman SN. Toward evidence-based medical statistics. 1: The P value fallacy. Ann Intern Med. 1999;130:995-1004.

4 Ziman JM. Reliable Knowledge: An Exploration of the Grounds for Belief in Science. Cambridge University Press: Cambridge, 1978.

5 Glasziou P, Meats E, Heneghan C, Shepperd S. What is missing from descriptions of treatment in trials and reviews? BMJ 2008; 336:1472–1474.

6 Hopewell S, Dutton S, Yu LM, Chan AW, Altman DG. The quality of reports of randomised trials in 2000 and 2006: comparative study of articles indexed in PubMed. BMJ. 2010;340:c723.

7 Plint AC, Moher D, Morrison A, Schulz K, Altman DG, Hill C, Gaboury I. Does the CONSORT checklist improve the quality of reports of randomised controlled trials? A systematic review. Med J Aust. 2006;185:263-267.

8 Goodman SN, Berlin J, Fletcher SW, Fletcher RH. Manuscript quality before and after peer review and editing at Annals of Internal Medicine. Ann Intern Med. 1994;121:11-21.

9 Schriger DL, Cooper RJ, Wears RL, Waeckerle JF. The effect of dedicated methodology and statistical review on published manuscript quality. Ann Emerg Med. 2002;40:334-337.

10 Goodman SN, Altman DG, George SL. Statistical reviewing policies of medical journals: caveat lector? J Gen Intern Med. 1998;13:753-756.

11 Day FC, Schriger DL, Todd C, Wears RL. The use of dedicated methodology and statistical reviewers for peer review: A content analysis of comments to authors made by methodology and regular reviewers. Ann Emerg Med. 2002;40:329-33.

12 Cooper RJ, Schriger DL, Tashman D. An evaluation of the graphical literacy of the Annals of Emergency Medicine. Ann Emerg Med. 2001;37:13-19.

13 Cooper RJ, Schriger DL, Close RJ. Graphical literacy: The quality of graphs in a large-circulation journal. Ann Emerg Med. 2002;40:317-22.

14 Cooper RJ, Wears RL, Schriger DL. Reporting research results: recommendations for improving communication. Ann Emerg Med. 2003;41:561-564.

15 Schriger DL.  Suggestions for improving the reporting of clinical research: the role of narrative. Ann Emerg Med. 2005;45:437-43.
