Elsevier Connect

How do you know if a research study is any good?

Science writers get tips from two epidemiologists and an MD/journalist

[caption id="attachment_15973" align="alignnone" width="800"] “Scientific Studies: All You Need to Know About What You’re Really Reading” Panelists presented to a joint meeting of Science Writers in New York (SWINY) and the New York City Metro Chapter of the Association of Health Care Journalists at the City University of New York Graduate School of Journalism. (Photos by Alison Bert, Editor-in-Chief, Elsevier Connect)[/caption]

Anecdotes can make data come alive, but anecdotal evidence is an oxymoron. – Bonnie Kerker, PhD

Making comparisons based on raw numbers ignores context. For example, 25 is a smaller number than 50, but 25 miles per hour is faster than 50 miles per day. – Carolyn Olson, MPH

Writing about a study after reading just a press release or an abstract … is journalistic malpractice. – Ivan Oransky, MD

This was some of the advice given by the three panelists at the second joint meeting of Science Writers in New York (SWINY) and the New York City Metro Chapter of the Association of Health Care Journalists on November 29. They addressed about 100 science writers, some of them graduate students, at the City University of New York Graduate School of Journalism.

[caption id="attachment_16000" align="alignright" width="400"] Science writer Melinda Wenner Moyer, an adjunct professor in the CUNY Graduate School of Journalism, moderated the event.[/caption]


The speakers were Bonnie D. Kerker, PhD, Assistant Commissioner and Senior Epidemiologist for the New York City Department of Health and Mental Hygiene (NYC DOHMH); Carolyn “Cari” Olson, MPH, Director of the Community Epidemiology Unit for the NYC DOHMH; and Ivan Oransky, MD (@ivanoransky), executive editor of Reuters Health and Clinical Assistant Professor of Medicine for the NYU School of Medicine. Dr. Oransky also teaches medical reporting in the Science, Health and Environmental Reporting Program of NYU’s Arthur L. Carter Journalism Institute, and he is one of the founders of Retraction Watch, which was the subject of a previous article on Elsevier Connect.

The panel was organized by SWINY Board member Rita Baron-Faust, MPH, a health educator, medical journalist and consultant for the NYU School of Medicine. The event was moderated by Melinda Wenner Moyer (@Lindy2350), an adjunct professor in the CUNY Graduate School of Journalism who writes regularly for magazines including Scientific American, Slate, Nature Medicine and Popular Science, as well as for her blog Body Politic.

Their presentation, “Scientific Studies: All You Need to Know About What You’re Really Reading,” was intended to help journalists “take a fresh look at the various types of studies flooding our inboxes, what they are designed to show, what they are actually showing, and why we all need a healthy dose of skepticism to avoid falling into the black-hole hype of dubious studies, reports and press releases,” according to the invitation.

Carolyn Olson: ‘To evaluate a study, you need context’

[caption id="attachment_15956" align="alignright" width="400"] Epidemiologist Carolyn “Cari” Olson is Director of the Community Epidemiology Unit for the NYC Department of Health and Mental Hygiene.[/caption]

Epidemiologist Carolyn Olson emphasized the importance of questioning health data: “First you must ask: Is there a comparison? Are the groups really comparable? Then you need to know: Are the differences being reported real? Can anything else explain the association? What can (and can’t) this study tell us, and how should findings be accurately presented?”

Noting that most data interpretation requires context, Olson discussed the importance of control groups. She said that without a comparison, the likelihood that findings are due to factors other than the hypothesized cause cannot be assessed.

Olson also discussed rates, which she called a basic epidemiologic tool because they allow for appropriate comparisons. To illustrate her point, she noted that there were 1,765 deaths due to heart disease in Flushing, Queens, in 2002 and 882 in Pelham Bay in the Bronx. She asked, “If you had a choice, where would you want to live? It seems safer to live in Pelham Bay.”

In fact, it is not. Statistically speaking, you are better off living in Flushing, because your risk of dying from heart disease is lower once you take into account that Flushing has a population of about 500,000 people versus Pelham Bay, which has about 250,000.
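To make Olson’s arithmetic concrete, here is a minimal sketch in Python of the rate calculation she described, using the death counts from her example and the approximate populations quoted above (the populations are rounded, so the computed rates are illustrative only):

```python
# Deaths per 100,000 residents: the basic epidemiologic rate Olson described.
# Counts are from her 2002 heart-disease example; the populations are the
# rough figures quoted above, so treat the output as illustrative.
def rate_per_100k(deaths: int, population: int) -> float:
    return deaths / population * 100_000

neighborhoods = {
    "Flushing, Queens": (1_765, 500_000),
    "Pelham Bay, Bronx": (882, 250_000),
}

for name, (deaths, pop) in neighborhoods.items():
    print(f"{name}: {deaths:,} deaths -> {rate_per_100k(deaths, pop):.1f} per 100,000")
```

Dividing by population and scaling to a common denominator is what makes neighborhoods of different sizes comparable at all; the raw counts alone cannot support either conclusion.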

“The bottom line is that to evaluate a study you need context,” she said. To illustrate her point, Olson said, “If you wrote an article that said people in New York City like sushi but you only spoke to 10 people, anyone reading your article would know that this is too small a sample to make such a conclusion in a city that has 8 million people.”
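As a rough illustration of why 10 respondents cannot support a citywide claim, here is a sketch (mine, not from the talk) of the standard 95 percent margin of error for a surveyed proportion, which shrinks only as the sample grows:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion p estimated from n people."""
    return z * math.sqrt(p * (1 - p) / n)

# Suppose 7 of 10 people surveyed say they like sushi (a hypothetical result).
for n in (10, 100, 1_000):
    print(f"n = {n:>5}: 70% +/- {margin_of_error(0.7, n):.0%}")
```

At n = 10 the uncertainty is roughly plus or minus 28 percentage points, and the normal approximation behind the formula is itself shaky at that size, so a “New Yorkers like sushi” conclusion would be indefensible.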

Dr. Bonnie Kerker: ‘How meaningful is this study?’

[caption id="attachment_15958" align="alignright" width="400"]Bonnie D. Kerker, PhD, is Assistant Commissioner and Senior Epidemiologist for the New York City Department of Health and Mental Hygiene. Bonnie D. Kerker, PhD, is Assistant Commissioner and Senior Epidemiologist for the NYC Department of Health and Mental Hygiene.[/caption]

Dr. Kerker gave an overview of the various types of studies, which include cross-sectional, case-control, prospective cohort and randomized controlled trials (RCTs). She noted that although RCTs are considered the gold standard, it is very difficult to do an RCT in public health.

“Randomized controlled trials are used extensively in drug trials and are done to show whether one drug is superior to another,” she explained. “Unfortunately, you do not find many in public health, in part due to ethical concerns.”

For example, it would not be ethical to assign one group to smoke and another not to in order to study the population effects of smoking on disease.
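As a concrete picture of what “randomized” means here, the following is a minimal sketch (not from the talk) of random assignment, the step that defines an RCT: chance, not the researcher or the participant, decides who lands in which arm, which is what makes the groups comparable at baseline.

```python
import random

# Hypothetical participant IDs; in a real trial these would be enrolled subjects.
participants = [f"participant_{i:02d}" for i in range(1, 9)]

random.shuffle(participants)  # chance alone determines assignment
half = len(participants) // 2
arms = {"treatment": participants[:half], "control": participants[half:]}

for arm, members in arms.items():
    print(f"{arm}: {members}")
```

It is exactly this step, assigning exposure by lot, that would be unethical for a question like smoking, which is why public health leans on observational designs instead.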

Dr. Kerker explained that studies other than RCTs are used in public health to detect risk factors that are associated with disease. One of these is the prospective cohort study.

[note color="#f1f9fc" position="center" width=800 margin=10]

Prospective cohort study – “a research study that follows over time groups of individuals who are alike in many ways but differ by a certain characteristic (for example, female nurses who smoke and those who do not smoke) and compares them for a particular outcome (such as lung cancer).” — National Cancer Institute[/note]

[caption id="attachment_15947" align="alignright" width="388"]Dr. Kerker urges science writers to question all aspects of the studies they report on. This was one of her slides.[/caption]
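To show what a prospective cohort comparison actually yields, here is a minimal sketch that follows the structure of the NCI example but uses invented counts (none of these numbers come from the talk):

```python
# A hypothetical prospective cohort: follow smokers and non-smokers over time
# and count who develops lung cancer. All counts below are invented.
exposed_cases, exposed_total = 90, 1_000      # smokers
unexposed_cases, unexposed_total = 10, 1_000  # non-smokers

risk_exposed = exposed_cases / exposed_total
risk_unexposed = unexposed_cases / unexposed_total

print(f"Risk among smokers:     {risk_exposed:.1%}")
print(f"Risk among non-smokers: {risk_unexposed:.1%}")
print(f"Relative risk: {risk_exposed / risk_unexposed:.1f}x")
```

A relative risk like this measures association, not proof of cause; confounding and other explanations still have to be ruled out, which is why Dr. Kerker urges writers to question every aspect of such studies.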

To bring data alive, journalists often use real-life stories, or anecdotes. Dr. Kerker warned the audience to be careful how they use anecdotes. Since they do not have the validity of scientific data, she suggested using them to illustrate the findings of the study, not the exception.

“Anecdotal evidence is an oxymoron,” she said. “(Anecdotes) should not be the only 'counterfactual' argument against data.”

She said fairness means presenting data from both sides and stating their limitations.

Dr. Kerker ended her talk with these tips:

  • Always source the data clearly, providing information and links to original research. (How big is the population that these findings apply to and what population exactly is referenced?)
  • Provide clear context of the literature base and importance of findings.
  • Question researchers on the limitations of their data.
  • Remember that researcher “headlines” (titles and abstracts) can be misleading.

Dr. Ivan Oransky: Don’t commit ‘journalistic malpractice’

[caption id="attachment_15970" align="alignnone" width="800"]Ivan Oransky, MD, is Executive Editor of Reuters Health and Clinical Assistant Professor of Medicine for the NYU School of Medicine. Ivan Oransky, MD, is Executive Editor of Reuters Health and Clinical Assistant Professor of Medicine for the NYU School of Medicine.[/caption]

In his presentation “Evaluating Medical Evidence for Journalists,” Dr. Oransky (@ivanoransky) warned journalists to make sure they understand the studies they write about — and to be willing to question the methods or findings.

He noted that retracted articles are on the rise, with 339 articles retracted worldwide in 2010 and about 400 in 2011. Examples on his blog Retraction Watch include a mathematics study that was retracted because it didn't make sense mathematically; a study with a fake author; and a study that not only had mistakes, but in which one of the authors failed to tell the other author the contents of the manuscript – or the fact that it had been submitted for publication. In one case, an article was retracted because the editors said it had no scientific content.

[caption id="attachment_15946" align="alignright" width="383"]With this slide, Dr. Oransky quotes an article from the New England Journal of Medicine, which he describes as "refreshingly honest" on the issue of embargo policies.[/caption]

After emphasizing that studies can be flawed, Dr. Oransky stressed the importance of understanding a study thoroughly before reporting on it. “Writing about a study after reading just a press release or abstract, without reading the entire paper, is journalistic malpractice,” he said.

He said there is no excuse for journalists not to read the full study, even if they are freelancers with no direct access to journals, and he offered several ways to get full-text access. NASW members, for example, get free access to ScienceDirect and Annual Reviews.

If all else fails, he said, journalists can always write to the authors directly, or to the press officers of authors’ institutions, and request a PDF of the article.

He also pointed out that journalists can get access to embargoed press releases by registering with EurekAlert, the online science news service sponsored by the American Association for the Advancement of Science (AAAS).

However, he criticized the practice of publishers embargoing journal articles while they are already available to subscribers, one of the issues he explores on his blog Embargo Watch. He urged the journalists in the audience not to get caught up in the hype of an embargo when determining whether a study is important.

When reading a research study, he said, journalists need to ask:

  • How good was the study? (Was it peer-reviewed? How large was the sample? Were the experiments done in humans?)
  • What’s your angle? (Who are you trying to help? Are you helping readers, listeners and viewers make better health-care decisions?)
  • Who could benefit? (How many people have the disease? Is it that significant?)
  • How effective is the treatment and what are the side effects or complications?
  • Who dropped out of the trial and why?
  • How much does the new drug or intervention cost? (Hint: you may not get an answer)
  • Did the authors disclose their conflicts of interest, if any?
  • Did the study compare the new treatment to existing alternatives or to a placebo? If so, how much better is the new treatment, and how much more expensive will it be?
In closing, Dr. Oransky showed his trademark irreverent humor in sharing “a dirty little secret”: “Keep a biostatistician in your back pocket,” he said. “They are very useful and often very lonely. Invite them for a cup of coffee or lunch.”

[note color="#f1f9fc" position="center" width=800 margin=10]

Note from the Author

[caption id="attachment_15965" align="alignleft" width="400"] David Levine is co-chairman of Science Writers in New York[/caption]

I thought this was a great program for many reasons. Besides being the second joint SWINY-AHCJ meeting, it was the first time I had heard the point of view of an epidemiologist on the topic. It helped me to understand why it is so hard to do an RCT in public health. Dr. Kerker and Ms. Olson also gave great definitions and explanations about rates, statistical testing, confidence intervals, confounding factors and the nuts and bolts of cross-sectional, case-control, ecologic and qualitative studies.

It also made me think about the limitations of RCTs, which have always been the “gold standard” of clinical trials. Many years ago, I interviewed the late Thomas Chalmers, MD, who was an eminent pioneer in controlled trials and one of the first researchers to promote meta-analysis, which pools many trials in an attempt to summarize what is known about a specific topic. I asked him about the criticism of meta-analysis, which some researchers questioned and even dismissed as “junk science.” Dr. Chalmers said to me, "Do you really think RCTs are flawless? Because if you do, they are not. There are many journal studies which have data from poorly designed RCTs."
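For readers unfamiliar with the mechanics, here is a minimal sketch (mine, not Dr. Chalmers’s) of the inverse-variance pooling at the heart of a simple fixed-effect meta-analysis, with invented per-trial results:

```python
import math

# Hypothetical (effect size, standard error) pairs, one per trial.
trials = [(0.30, 0.15), (0.10, 0.10), (0.25, 0.20)]

# Fixed-effect inverse-variance pooling: more precise trials get more weight.
weights = [1 / se**2 for _, se in trials]
pooled = sum(w * effect for (effect, _), w in zip(trials, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled effect: {pooled:.2f} "
      f"(95% CI: {pooled - 1.96 * pooled_se:.2f} to {pooled + 1.96 * pooled_se:.2f})")
```

The weighting also shows why Dr. Chalmers’s caution matters: a poorly designed trial that reports deceptively small standard errors will dominate the pooled estimate.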

As we have seen with drug recalls and withdrawals over the years (such as Vioxx, Bextra and Avastin), as well as with the need for boxed warnings about antidepressants and suicidal ideation in teenagers, science is a process and sometimes the experts are wrong. So the take-home message for me was the importance of knowing not only the results of a study, but also its size, endpoints and methodology, and remembering that the passage of time is an important factor as well.

[divider]David Levine (@Dlloydlevine) is co-chairman of Science Writers in New York (SWINY) and a member of the National Association of Science Writers (NASW). He served as director of media relations at the American Cancer Society and as senior director of communications at the NYC Health and Hospitals Corp. He has written for Scientific American, Good Housekeeping, BioTechniques and Robotic Trends and was a contributing editor at Physician’s Weekly for 10 years. He has a BA and an MA from The Johns Hopkins University.[/note]


