How can scholarly journals promote transparent and responsible research?
That’s the question Dr. Mario Malički, a postdoctoral researcher at the Academic Medical Center of the University of Amsterdam, and his colleagues set out to answer in a four-part study funded by Elsevier. Here, Dr. Malički reveals what they discovered.
I’m very idealistic. I studied to become a medical doctor in Croatia, and then fell in love with medical ethics and research. During my training, I remember one of my professors telling me that in engineering, it’s easy to determine if a product is working. When engineers get a blueprint, if they cannot build a functioning product, no matter where in the world they are, the blueprint is wrong.
Things turned out not to be so simple in the rest of the sciences, however.
There has been a lot of talk lately about the lack of reproducibility and transparency in research, as well as the pervasiveness of detrimental research practices. So, when I saw a job advert for this project, which is looking at what the scientific community – especially journals – can do to foster transparent and responsible research practices, I felt it had been written for me.
I joined the project team in May last year, and we agreed to conduct four studies:
- A look at the past – a systematic review of studies that analyzed instructions to authors to find out how scientific scholarship has changed over time, especially in regard to research integrity topics.
- A study of current instructions to authors across all sciences, to find out what research integrity topics are currently relevant to journals.
- A survey of authors, editors, and reviewers, asking for their views on research transparency and detrimental research practices.
- A look at the future – interviews and interactive workshops with authors, editors, and reviewers on where they think scientific publishing is heading.
To date, we’ve completed all of them and are in the process of analyzing the data we collected and writing up the studies. But we couldn’t help looking at the preliminary data (and presenting them at conferences), and also reflecting on the surprises we encountered along the way.
Step one – A look at the past
The systematic review revealed that 212 publications had analyzed journal policies, of which 153 looked exclusively at instructions to authors. What soon became clear, though, is that the majority of these focused on health science journals.
What surprised us is that more than 100 different topics were analyzed in these studies, and when we looked at the methodology, for the majority of the studies it was unclear just how they had analyzed the instructions. Reading and reporting on them comes to mind, of course, but it was rarely stated who had done the work, or whether more than one person had read and coded the instructions. It was also unclear which version of the instructions had been used, or the year of the guidelines being analyzed. And, as we later discovered during our second step, journal websites don't host old versions of their author instructions and rarely date or number their versions. With software, on the other hand, you always know exactly which version you are dealing with.
The positive thing we found is that over time, many research integrity topics were being covered by an increasing number of journals. But differences between disciplines were huge, and no research integrity topic was covered by all journals. The most widely covered was the handling of conflicts of interest in health science journals.
Step two – Where are we today?
Our goal was to analyze a truly representative sample of journals across all disciplines, countries and journal impact. Our calculation told us we needed a total of 850 journals, so we sampled Scopus by randomly selecting an equal number of journals from each SNIP (Source Normalized Impact per Paper) tercile within each discipline. I was surprised that there were almost 6,000 publishers in Scopus, which publish 56 percent of all journals indexed there; the remaining 44 percent are published by four big publishers: Elsevier, Wiley, Springer Nature and Taylor & Francis.
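The stratified sampling described above – an equal number of journals drawn at random from each SNIP tercile within each discipline – can be sketched as follows. The journal records, discipline names, and cell size here are illustrative assumptions, not the project's actual data or code.

```python
import random

# Hypothetical journal records: (journal_id, discipline, snip_score).
# Real data would come from a Scopus export; these values are made up.
random.seed(42)
DISCIPLINES = ("health", "physical", "social", "life")
journals = [(f"{d}-{i}", d, round(random.uniform(0.1, 5.0), 3))
            for d in DISCIPLINES
            for i in range(300)]

def sample_by_snip_tercile(journals, per_cell):
    """Randomly pick `per_cell` journals from each SNIP tercile
    within each discipline (stratified random sampling)."""
    sample = []
    for d in {j[1] for j in journals}:
        # Rank the discipline's journals by SNIP, then split into thirds.
        group = sorted((j for j in journals if j[1] == d), key=lambda j: j[2])
        third = len(group) // 3
        terciles = [group[:third], group[third:2 * third], group[2 * third:]]
        for tercile in terciles:
            sample.extend(random.sample(tercile, min(per_cell, len(tercile))))
    return sample

picked = sample_by_snip_tercile(journals, per_cell=70)
```

With four disciplines, three terciles each, and 70 journals per cell, this yields 840 journals – close to the 850 the project's calculation required; the real study would tune the cell size per discipline.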
Even more surprising to me was that a Google search did not turn up the websites of many of the journals in our sample, especially in the case of Chinese journals. Instead, I had to contact authors who had published in these journals and ask for their help in finding them.
And finally, 70 percent of the 125 journals I contacted, irrespective of their country of publication, never replied to my inquiries about the specifics of their policies. I do hope authors submitting papers are luckier in their communication with journals.
In the end, we analyzed 19 topics related to transparency and research integrity, and all but three were covered by fewer than a third of the journals in our sample. This is where the idealist in me was disappointed. I wondered: are these topics really so unimportant to many scientists? Or is it because we collectively do not know how to deal with them appropriately that journals are choosing, for now, to remain silent on these issues? Formatting and citation were still the main focus of author instructions, which seems a bit outdated to me in this machine-reading era.
When our analysis of journals was nearing its end, we sent a survey to 100,000 authors in Scopus who had recently published an article. A quarter of those emails returned a non-delivery report, suggesting that getting in touch with authors can be as elusive as contacting journals. As is common for large online surveys, our response rate was low, a little below 5 percent, but we did reach researchers from 126 countries.
Again, what those researchers reported was a bit disappointing for me. More than three quarters did not agree that researchers should register their studies before data collection begins, nor that they should specify their data analysis plans in advance, and 40 percent did not believe authors must share their data with others. Additionally, only half of them believed that the quality of mentoring of young researchers was high. And finally, and not surprisingly, the most prevalent detrimental practice they reported was the manipulation of authorship in their field.
In short, neither the journal policies nor the authors’ attitudes were where I hoped they would be in 2018.
Step three – What could the future bring?
We interviewed researchers and editors and held interactive workshops at several conferences. We looked at how reviewer reports could be improved, how prospective publication of study protocols could be enhanced, and how better reporting of study limitations could be encouraged. We also looked at visions for the future of scholarly publishing and the pros and cons of switching to green open access publishing (pre-prints) and post-publication commenting.
Suggestions were wide-ranging: using Wikipedia-style versioning of papers, blockchain technology, linking ethics approvals, protocols, data, published studies and post-publication commenting, and embedding them all within the articles.
The responses only strengthened the overall impression we got during those talks: that the current system can be significantly improved. While there was excitement about what the future could bring, there were also worries that the diversity and uniqueness of different fields could be lost. If pre-prints were to become the dominant means of communication, new mechanisms for career advancement and funding distribution would need to be devised, along with mechanisms to ensure that research from junior researchers, or on non-hype topics, remains equally visible and appropriately handled.
Conclusion – our recommendations
As we are still analyzing the data we collected, our final recommendations will be drafted after that work is done. What we do know, however, is that we need to show editors, publishers and researchers why research integrity is important, and work with them to find ways in which all the detrimental issues science is facing can be identified and, ideally, prevented. The ideal scenario for me personally would be for an editors' association to create a template of author instructions covering all sciences and all research integrity topics currently under discussion – and for that template to be applied in scientific writing and methodology courses around the globe.
Although the majority of problems occur while the research is being done, not when it is submitted to a journal, journals still need to make it clear that detrimental research practices are unacceptable, do their best to check for them and handle them, and alert readers when suspicions or discoveries about them arise. And they should strive to communicate with researchers, authors and the public in a timely manner.
More about the project
- Analyze references to publication ethics, research integrity and transparency policies in existing journal author instructions.
- Conduct a systematic review of previous studies that have analyzed journal author instructions.
- Survey editors, authors and reviewers to understand perceptions and attitudes towards transparency and responsible conduct of research, and differences between those perceptions based on their roles.
- Make (evidence-based) recommendations about how publishers and journals can implement publication principles and foster the integrity and transparency of research.
You can find the full project description and accompanying data files on the project’s Mendeley Data page.
- Dr. Mario Malički and Prof. Gerben ter Riet: Amsterdam UMC, University of Amsterdam, Department of General Practice, Academic Medical Center, Amsterdam, the Netherlands
- Associate prof. Ana Jerončić: Department of Research in Biomedicine and Health, University of Split School of Medicine, Split, Croatia
- Dr. IJsbrand Jan Aalbersberg, SVP of Research Integrity, Research Product, Elsevier, Amsterdam
- Prof. Lex Bouter: Department of Philosophy, Faculty of Humanities, Vrije Universiteit, Amsterdam, and Amsterdam UMC, Department of Epidemiology and Biostatistics, VU University Medical Center, Amsterdam, the Netherlands
The project is funded by Elsevier.