Bias in research: the rule rather than the exception?
Discussing some of the causes and prevalence of bias in the fields of biomedical research
By Kevin Mullane & Michael Williams Posted on 17 September 2013
Dr Kevin Mullane and Dr Mike Williams, two of the editors of the Elsevier journal Biochemical Pharmacology, discuss some of the causes and prevalence of bias in biomedical research - and the implications for the wider research community.
As the primary purpose of scientific publication is to share ideas and new results to foster further developments in the field, the increasing prevalence of fraudulent research and retractions is of concern to every scientist since it taints the whole profession and undermines the basic premise of publishing.
While most scientists tend to dismiss the problem as being due to a small number of culprits - a shortcoming inherent to any human activity - there is a larger issue on the fringes of deception that is far more prevalent and of equal concern, where the adoption of certain practices can blur the distinction between valid research and distortion – between "sloppy science", "misrepresentation", and outright fraud (1).
Bias in research, where prejudice or selectivity introduces a deviation in outcome beyond chance, is a growing problem, probably amplified by:
- the competitive aspects of the profession with difficulties in obtaining funding;
- pressures for maintaining laboratories and staff;
- the desire for career advancement (‘first to publish’ and ‘publish or perish’); and, more recently,
- the monetization of science for personal gain.
Rather than being "disinterested contributors to a shared common pool of knowledge" (2), some scientists have become increasingly motivated to seek financial rewards for their work through industrial collaborations, consultancy agreements and venture-backed business opportunities; even to the exclusion of concerns regarding the accuracy, transparency and reproducibility in their science.
Bias tends to be obscured by the sheer volume of data reported. The number of publications in Life Sciences has increased 44% in the last decade, and at least one leading biomedical journal now publishes in excess of 40,000 printed pages a year. Data is generally viewed as a "key basis of competition, productivity growth...[and]... innovation" (3), irrespective of its conception, quality, reproducibility and usability. Much of it, in the opinion of Sydney Brenner, has become "low input, high throughput, no output science" (4).
Indeed, while up to 80% of research publications apparently make little contribution to the advancement of science, "sit[ting] in a wasteland of silence, attracting no attention whatsoever" (5), it is disconcerting that the remaining 20% may suffer from bias, as reflected in the increasing incidence of published studies that cannot be replicated (6,7) or that require corrections or retractions (8), the latter a reflection of the power of the Internet.
Categories of bias
Although some 235 forms of bias have been analyzed, clustered and mapped to biomedical research fields (9), for the purposes of this brief synopsis a cross-section of common examples is grouped into three categories:
1. Bias through ignorance can be as simple as not knowing which statistical test should be applied to a particular dataset, reflecting inadequate knowledge or scant supervision/mentoring. Similarly, the frequent occurrence of inappropriately large effect sizes observed when the number of animals used in a study is small (10-13), that subsequently disappear in follow-up studies that are more appropriately powered or when replication is attempted in a separate laboratory, may reflect ignorance of the significance of determining effect sizes and conducting power calculations (11,12,14).
The concern with disproportionately large effect sizes from small group sizes has been recognized by the National Institutes of Health (NIH) (15), which now mandates power calculations validating the number of animals necessary to determine if an effect occurs before funding a program. However, this necessitates preliminary, exploratory analyses replete with caveats, which might not get revisited, and is not a requirement with many other funding agencies. Too often, studies are published with the minimal number of animals necessary to plug into a Student's t-test software program (n=3), or with group sizes based on 'experience' or history. Replication of any finding as a standard component of a study is absolutely critical, but rare.
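To illustrate the kind of power calculation at issue, here is a minimal sketch using the standard normal approximation for a two-sample, two-sided comparison of means; the function name and default thresholds are our own, not taken from the NIH guidance:

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sample, two-sided
    comparison, via the normal approximation:
    n = 2 * ((z_{1-alpha/2} + z_{power}) / d) ** 2."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_beta = z(power)            # ~0.84 for 80% power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Even a "large" standardized effect (Cohen's d = 0.8) needs about
# 25 animals per group; n = 3 can only detect implausibly huge effects.
print(n_per_group(0.8))  # 25
print(n_per_group(0.5))  # 63
```

The point of the sketch is the asymmetry it exposes: halving the expected effect size roughly quadruples the animals required, which is why studies powered by 'experience' so often report inflated effects that vanish on replication.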
2. Bias by design reflects critical features of experimental planning ranging from the design of an experiment to support rather than refute a hypothesis; lack of consideration of the null hypothesis; failure to incorporate appropriate control and reference standards; and reliance on single data points (endpoint, time point or concentration/dose). Of particular concern is the failure to perform experiments in a blinded, randomized fashion, which can result in 3.2- and 3.4-fold higher odds, respectively, of observing a statistically significant result when compared to studies that were appropriately blinded or randomized (16). While the impact of randomization might come as a surprise, since many animal studies are conducted in inbred strains with little heterogeneity, the opportunity to introduce bias into non-blinded experiments, even unintentionally, is very obvious. It is paramount that the investigator involved in data collection and analysis is unaware of the treatment schedule. How an outlier is defined and to be handled (e.g. dropped from the analysis), or what sub-groups are to be considered, must be established a priori and effected before the study is un-blinded. Despite its importance in limiting bias, one analysis of 290 animal studies (16) and another of 271 publications (15) revealed that 86-89% were not blinded.
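How blinding and randomization interlock can be sketched in a few lines. This is a hypothetical allocation helper, not drawn from any of the cited studies: subjects are randomized into balanced groups, the experimenter sees only neutral codes, and the key is held by a third party until the pre-specified analysis is locked.

```python
import random

def blinded_allocation(n_subjects, treatments=("vehicle", "drug"), seed=None):
    """Balanced randomization with blinding: returns the coded subject IDs
    (all the analyst sees) and the sealed key mapping codes to treatments."""
    if n_subjects % len(treatments) != 0:
        raise ValueError("n_subjects must divide evenly among treatments")
    rng = random.Random(seed)
    assignments = list(treatments) * (n_subjects // len(treatments))
    rng.shuffle(assignments)                     # randomized order
    codes = [f"S{i:03d}" for i in range(1, n_subjects + 1)]
    key = dict(zip(codes, assignments))          # held by a third party
    return codes, key                            # un-blind only after analysis

codes, key = blinded_allocation(12, seed=42)
# The analyst works from `codes` alone; outlier rules and sub-group
# definitions are fixed a priori, and `key` is opened only afterwards.
```

The design choice matters more than the code: defining outlier handling and sub-groups before the key is opened is what removes the opportunity, even an unintentional one, to steer the result.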
Another important consideration in experimental design is the control of potentially confounding factors that can influence the experimental outcome indirectly. In the field of pharmacology, at a basic level this might include the importance of controlling blood pressure when conducting evaluations of compounds in preclinical studies of heart attack, stroke or thrombosis; or the recognition that most compounds lose specificity at higher doses; but consideration might also need to be given to other factors such as the significance of chronobiology (where, for example, many heart attacks occur within the first 3 hours of waking) (30).
3. Bias by misrepresentation. Researchers are an inherently optimistic group – the 'glass half full' is more likely brimming with champagne than tap water. Witness the heralding of the completion of the Human Genome Project or the advent of gene therapy, stem cells, antisense, RNAi, any "-omics" - all destined to have a major impact on eradicating disease in the near-term. This tendency for over-statement and over-simplification carries through to publications. The urge and rush to be first to publish a new "high-profile" finding can result in "sloppy science" (1), but more significantly can be the result of a strong bias (17). Early replications tend to be biased against the initial findings, a pattern known as the Proteus phenomenon, although that bias is smaller than for the initial study (17). It is not clear which is more disturbing – the level of bias and selective reporting found to occur in the initial studies; the finding that ~70% of follow-on studies contradict the original observation; or that it is so common and well-recognized a phenomenon that it even has a name.
A recent evaluation of 160 meta-analyses involving animal studies covering six neurological conditions, most of which were reported to show statistically significant benefits of an intervention, found that the "success rate" was too large to be true and that only 8 of the 160 could be supported, leading to the conclusion that reporting bias was a key factor (18).
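The logic of such an "excess significance" check can be illustrated with a one-sided binomial tail; this is a simplified sketch of the general approach (real analyses, such as (18), estimate each study's power individually rather than using a single mean value):

```python
from math import comb

def excess_significance_p(n_studies, n_significant, mean_power):
    """Probability of observing at least n_significant 'positive' studies
    if each study independently has probability mean_power of yielding a
    significant result (one-sided binomial tail). A tiny value suggests
    more positives than the studies' power can plausibly deliver."""
    p = mean_power
    return sum(comb(n_studies, k) * p ** k * (1 - p) ** (n_studies - k)
               for k in range(n_significant, n_studies + 1))

# If 20 of 20 studies are significant but each had only 50% power, the
# observed "success rate" is astronomically unlikely without reporting bias.
print(excess_significance_p(20, 20, 0.5))
```

In other words, even modest literatures can be tested for an implausible surplus of statistically significant findings, which is exactly the signature of selective reporting.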
The retrospective selection of data for publication can be influenced by prevailing wisdom promoting expectations for particular outcomes, or by the benefit of hindsight at the conclusion of a study, which allows an uncomplicated sequence of events to be traced and promulgated as the only conclusion possible.
While research misconduct in terms of overt fraud (1,19,20) and plagiarism (21) is a topic with high public visibility, it remains relatively rare in research publications, while data manipulation, data selection and other forms of bias are increasingly prevalent. Whether intentional, the result of inadequate training, or due to a lack of attention to quality controls, these practices foster an approach and attitude that blurs the distinction between necessary scientific rigor and deception, and probably contribute substantially to the poor reproducibility of biomedical research findings (6,7).
Scientific bias represents a proverbial "slippery slope", from the subjectivity of "sloppy science" (1) and lack of replication (22) to the deliberate exclusion or non-reporting of data (6,7) to outright fabrication (19,20). Plagiarism, distortion of data or its interpretation, physical manipulation of data, e.g., western blots (23), NMR spectra (24) to make the outcomes more visually appealing or obvious (often ascribed to the seductive simplicity of PowerPoint and the ease of manipulation with Photoshop), and blatant duplicity in the biopharma industry in the selective sharing of clinical trial outcomes (25) with inconclusive/negative trials often not reported (26), all contribute to the expanding concerns regarding scientific integrity and transparency.
This issue obviously increases in importance as the outcomes of investigator bias impact the expenditure of millions of dollars on research programs that are progressed based on the data presented; where inappropriate New Chemical Entities are advanced into clinical trials, also exposing patients to undue risk; and where unvalidated biomarkers are promoted to an anxious and misinformed public.
With the increase in bias, data manipulation and fraud, the role of the journal editor has become more challenging, both from a time perspective and with regard to avoiding peer-review bias (27). And while editors work to keep the barriers high (8,28), much of the process still depends on the integrity and ethics of the authors and their institutions. It is paramount that institutions, mentors and researchers promote high ethical standards, rigor in scientific thought and ongoing evaluations of transparency and performance that meet exacting guidelines. Clinical trials with a full protocol defining the size of the study, randomization, dosing, blinding and endpoints have to be registered before the study can begin, and, at the conclusion of the study, every patient has to be accounted for and included in the analysis. A proposal has been made (29) that non-clinical studies should adopt the same standards and, while not a requirement, such guidelines provide a useful rule of thumb to consider when designing any study. These topics, and their impact on the translation of research findings to the clinic, will be discussed in greater detail in an upcoming article in Biochemical Pharmacology (30).
CARDIOVASCULAR EDITOR, BIOCHEMICAL PHARMACOLOGY & PRESIDENT, PROFECTUS PHARMA CONSULTING INC.
Kevin’s main guise has been as a drug hunter at multinational pharmaceutical (Wellcome, CIBA-Geigy) and biotechnology companies (Gensia, Chugai Biopharmaceuticals), before becoming President and CEO of Inflazyme Pharmaceuticals. Subsequently he has been an advisor to industry, academia, foundations and VC companies, evaluating technologies and developing translational opportunities. Kevin received his PhD from the University of London.
COMMENTARIES EDITOR, BIOCHEMICAL PHARMACOLOGY & ADJUNCT PROFESSOR, DEPARTMENT OF MOLECULAR PHARMACOLOGY AND BIOLOGICAL CHEMISTRY, FEINBERG SCHOOL OF MEDICINE, NORTHWESTERN UNIVERSITY, CHICAGO.
Mike retired from the pharmaceutical industry in 2010 after 34 years in drug discovery research with Merck, CIBA-Geigy, Abbott and Cephalon. He has been actively involved with the biotech industry as a consultant, SAB member and executive (Nova, Genset, Adenosine Therapeutics, Antalium, Tagacept, Elan, Molecumetics) and has published extensively in the areas of pharmacology and drug discovery. He received his PhD and DSc degrees from the University of London in an era long before e-books could be downloaded.
(1) Stemwedel JD, “The continuum between outright fraud and "sloppy science": inside the frauds of Diederik Stapel (part 5)”, Scientific American June 26, 2013.
(2) Felin T, Hesterly WS, "The Knowledge-Based View, Nested Heterogeneity, And New Value Creation: Philosophical Considerations On The Locus Of Knowledge", Acad. Management Rev 2007, 32: 195–218.
(3) Manyika J, Chui M, Brown B, Bughin J, Dobbs R, Roxburgh C, Byers AH, “Big data: The next frontier for innovation, competition, and productivity”, McKinsey Global Institute, April 2011.
(4) Brenner S, “An interview with... Sydney Brenner”, Interview by Errol C. Friedberg, Nat Rev Mol Cell Biol 2008; 9:8-9.
(5) Mandavilli A, “Peer review: Trial by Twitter”, Nature 2011; 469, 286-7.
(6) Prinz F, Schlange T, Asadullah K, “Believe it or not: how much can we rely on published data on potential drug targets?”, Nature Rev Drug Discov 2011; 10: 712-3.
(7) Begley CG, Ellis LM, “Drug development: Raise standards for preclinical cancer research“, Nature 2012, 483, 531-533.
(8) Steen RG, Casadevall A, Fang FC, “Why has the number of scientific retractions increased?”, PLoS ONE 2013; 8: e68397.
(9) Chavalarias D, Ioannidis JPA, “Science mapping analyses characterizes 235 biases in biomedical research”, J Clin Epidemiol 2010; 63: 1205-15.
(10) Ioannidis JPA, “Why most published research findings are false”, PLoS Med 2005; 2: e124.
(11) Button KS, Ioannidis JP, Mokrysz C, Nosek BA, Flint J, et al., “Power failure: why small sample size undermines the reliability of neuroscience”, Nat Rev Neurosci 2013; 14: 365-76.
(12) Henderson VC, Kimmelman J, Fergusson D, Grimshaw JM, Hackam DG, “Threats to validity in the design and conduct of preclinical efficacy studies: a systematic review of guidelines for in vivo animal experiments”, PLoS Med 2013; 10: e1001489.
(13) Sena ES, van der Worp HB, Bath PMW, Howells DW, Macleod MR, “Publication bias in reports of animal stroke studies leads to major overstatement of efficacy”, PLoS Biol 2010; 8: e1000344.
(14) Kilkenny C, Parsons N, Kadyszewski E, Festing MFW, Cuthill IC, et al., “Survey of the quality of experimental design, statistical analysis and reporting of research using animals“, PLoS One 2009; 4: e7824.
(15) Wadman M, “NIH mulls rules for validating key results”, Nature 2013; 500: 14-6.
(16) Bebarta V, Luyten D, Heard K, “Emergency medicine animal research: does use of randomization and blinding affect the results?”, Acad Emerg Med 2003; 10: 684-7.
(17) Pfeiffer T, Bertram L, Ioannidis JPA, “Quantifying selective reporting and the Proteus Phenomenon for multiple datasets with similar bias“, PLoS One 2011; 6: e18362.
(18) Tsilidis KK, Panagiotou OA, Sena ES, Aretouli E, Evangelou E, et al., “Evaluation of excess significance bias in animal studies of neurological diseases“, PLoS Biol 2013; 11: e1001609.
(19) Kakuk P, “The Legacy of the Hwang Case: Research Misconduct in Biosciences”, Sci Eng Ethics 2009; 15: 645-62.
(20) Bhattacharjee Y. “The Mind of a Con Man“, New York Times Magazine April 26, 2013.
(21) “Science publishing: How to stop plagiarism”, Nature 2012; 481: 21-3.
(22) Oransky I, “The Importance of Being Reproducible: Keith Baggerly tells the Anil Potti story”, Retraction Watch, May 4, 2011.
(23) Rossner M, Yamada KM, “What's in a picture? The temptation of image manipulation”, J Cell Biol 2004;166:11-5.
(24) Smith III AB, “Data Integrity”, Org Lett 2013; 15: 2893-4.
(25) Eyding D, Lelgemann M, Grouven U, Harter M, Kromp M, Kaiser T et al., “Reboxetine for acute treatment of major depression: systematic review and meta-analysis of published and unpublished placebo and selective serotonin reuptake inhibitor controlled trials”, BMJ 2010;341:c4737.
(26) Doshi P, Dickersin K, Healy D, Vedula SW, Jefferson T, “Restoring invisible and abandoned trials: a call for people to publish the findings”, BMJ 2013; 346: f2865.
(27) Lee CJ, Sugimoto CR, Zhang G, Cronin B, “Bias in peer review”, J Amer Soc Info Sci Technol 2013; 64: 2-17.
(28) “Reducing our irreproducibility”, Nature 2013; 496: 398.
(29) Kilkenny C, Browne WJ, Cuthill IC, Emerson M, Altman DG, “Improving bioscience research reporting: the ARRIVE guidelines for reporting animal research“, PLoS Biol 2010; 8: e1000412.
(30) Mullane K, Winquist RW, Williams M, “The translational paradigm in drug discovery”, Biochemical Pharmacology, 2014.
Daniel Dietrich says: September 19, 2013 at 1:39 pm
While I wholeheartedly support your article and can corroborate many of the issues you have addressed, I am a little sensitive to one point in your article:
The statement "some scientists have become increasingly motivated to seek financial rewards for their work through industrial collaborations, consultancy agreements and venture-backed business opportunities; even to the exclusion of concerns regarding the accuracy, transparency and reproducibility in their science" is true, but it is not exclusively true for industry collaborators. Indeed, a similar observation can be made for scientists purporting specific visions while aiming to renew their funding, whether public or from NGOs. So it is not a question of where the funding comes from; the real issue is the lack of integrity of the individual scientist. In short, amongst academics receiving "only" public or NGO support there are just as many opportunists.
Mike Williams says: September 19, 2013 at 3:44 pm
Thank you for the comment. We certainly don't disagree with your viewpoint – our point would be that a lack of integrity can be greatly facilitated by the presence of industry/VC funding which is usually for personal gain.
Emma says: September 20, 2013 at 1:42 am
I would add reviewer bias to the list. Reviewer bias may be greatly impacting what is actually getting out there in publications
Mike Williams says: September 20, 2013 at 12:49 pm
Emma – space considerations precluded a comment on that, as well as on editor bias and reader bias. Check out our upcoming paper (ref 30) for additional comment.
Frank says: September 20, 2013 at 7:45 pm
Needless to say, in the social sciences the documented pressures have even stronger detrimental effects. To the extent that major parts of the social sciences are occupied with 'constructing reality', or at least interpreting reality, researcher integrity becomes a problematic concept in itself, and policies intended to 'enhance researcher integrity' are necessarily inadequate.
Mike Williams says: September 23, 2013 at 3:33 pm
Frank – thank you for your comment – we agree that the social sciences have an even bigger challenge. We'll take a rodent as an experimental subject over a human – any day.
Malcolm Macleod says: September 22, 2013 at 9:56 am
Thanks for a nice discussion of the issues. I think the solution lies in two things; Firstly, as you say, protocols defined before the experiment commences, placed somewhere so at least peer reviewers (and probably readers) can check what was done against what was planned. Critically, these should include the statistical analysis plan and the definition of the primary outcome. Secondly, institutions and individual researchers should engage in audit and improvement activity to get a reliable feel of where they are just now in the bias spectrum (always further to the wrong end than they would like to think) and establish some mileposts for improvement.
Mike Williams says: September 23, 2013 at 3:47 pm
I believe this is what Hooijmans and Ritskes-Hoitinga (PLoS Med 2013, 10: e1001482), Ben Goldacre and Martin Wehling – among many others – are advocating. It's all about being systematic, transparent and objective, none of which is in very abundant supply in today's research world. Our motivation in writing this brief note was a response to the increasingly poor standards of the papers we review and reject. Unfortunately, many of the authors of these wouldn't know where to start in addressing these issues due to a lack of appropriate training and mentoring. Our colleague Jeff Idle, who is on the EAB of Biochemical Pharmacology, made the following comment: "Science is a hard task master and its description should not be varied at the whim of the authors."
It's my top quote for 2013 ahead of red lines and "What does it matter anyway?".
Thanks for your comment