
How reproducibility is gaining first-class status in scientific research

May 3, 2024

By Milly Sell

Photo of Dr Ben Marwick, Professor of Archaeology at the University of Washington and Associate Editor of the Journal of Archaeological Science.

Ensuring research is reproducible is crucial to instilling confidence in its findings. Prof Ben Marwick, Associate Editor of the Journal of Archaeological Science, talks about recent initiatives that are encouraging software and data sharing.

If scientific work can be reproduced, it can be validated or built upon by others. This is why reproducibility is described as vital and underpinning trust in science. With today’s research often involving complex computational analysis, ensuring reproducibility means openly sharing any accompanying software, data and code.

This topic has been of great interest to Dr Ben Marwick, Professor of Archaeology at the University of Washington in Seattle, throughout his career. Ben, whose research focuses on Paleolithic archaeology in mainland Southeast Asia, explains:

I have always been interested in more quantitative applications in science. When I was first conducting complex and delicate statistical tests as a graduate student, I started to feel concerned about whether I could recalculate the exact same results if I did it again another day.

At that time, software tools to support analysis were not very user friendly. Because they required dedicated time and energy to adopt, not many people were using them. Ben started to explore disciplines beyond archaeology to understand how others were ensuring high levels of confidence in the reproducibility of their results:

I found that some disciplines, like biology and ecology, were using programming languages, such as R and Python, within their research. Initially, this seemed like a strange thing to do — more suitable for computer scientists than research scientists. But I saw they were using it in a very meaningful way, writing their steps out clearly and rerunning them using the program to confirm the result. It seemed like a very elegant and efficient way to tackle the thing that was keeping me up at night.
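To make concrete what that looks like in practice, here is a minimal sketch of a scripted analysis in the spirit Ben describes: every step is written out in code, so rerunning the file regenerates exactly the same result. The file name and column names are hypothetical, not taken from Ben's own work.

# A minimal sketch of a fully scripted analysis: each step is explicit,
# so rerunning the script on another day reproduces the same numbers.
# The file name and column names below are hypothetical.
import pandas as pd


def summarize_measurements(csv_path: str) -> pd.DataFrame:
    # Step 1: load the raw data exactly as recorded.
    data = pd.read_csv(csv_path)

    # Step 2: apply the same cleaning rule every run (drop incomplete rows).
    data = data.dropna(subset=["artifact_length_mm"])

    # Step 3: compute the summary statistic reported in the write-up.
    return data.groupby("site")["artifact_length_mm"].agg(["mean", "count"])


if __name__ == "__main__":
    # Same input file in, same table out, every time the script is run.
    print(summarize_measurements("measurements.csv"))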

As Ben started to pluck out parts of this approach that could be meaningful for archaeologists, he found others in his field with the same interest. Not only that, but he found some scientists were already working in a very transparent way:

I came across other archaeologists adopting programming languages to help their analysis. I found people who wanted to document their steps in code and share these files somewhere others could access them and check their work. Then we can see all the steps of their data analysis in fine detail, which makes it easier to understand how they came to their final conclusions.

Supporting a growing community

Fast forward to today, and the community of researchers using computational tools has grown from a handful into a substantial one. Ben attributes this partly to technological advances, particularly in the last five years:

Writing and using code used to be like speaking a foreign language. Now, it’s a lot more accessible with better support and documentation to help people find and understand how to use the software effectively and appropriately. The barriers to learning to use code and related tools have really come down a lot.

Tools may have improved but using them still requires significant effort. Ben believes there are many ways researchers can be encouraged to undertake this effort and place a greater emphasis on reproducibility throughout the publishing process:

“The reproducibility of research work needs to be recognized as an exceptional scholarly product, much like a journal article itself.”


Ben Marwick, PhD

Professor of Archaeology, University of Washington | Associate Editor, Journal of Archaeological Science

By making code and data a central part of the submission and review process, publishers confer a high level of importance on them. One way to do this is by requiring a Data Availability Statement for submissions. This statement lets readers know where they can access any digital data or code used to generate research results. Data needs to be made freely available online when it is legally and ethically possible to do so.

Incorporating a reproducibility review

While a Data Availability Statement is a great starting point, Ben highlights an opportunity for publishers to level up further.

At the Journal of Archaeological Science (JAS), where Ben is an Associate Editor, a reproducibility review was introduced in March as part of the submission process:

Having a reproducibility review is novel in the world of archaeological publishing. Having it built into the submission process is a big deal and ensures people will take reproducibility much more seriously.

Now, when an author submits a paper and we see mention of the use of programming language and code, we access the relevant files and run the code using their data. We see if we can generate the same results they presented in their paper.

The review is conducted by a reproducibility specialist. They then work with the author to share recommendations, suggest fixes for any errors, and highlight areas the author needs to revisit:

It’s a more collaborative process than a traditional peer review — we really want any code to work and be useful to others. Once the paper is published, we add a note at the end of the paper to inform readers that it went through an extra level of checks and verification to ensure some or all figures and tables can be regenerated by another person. This is a really high level of computational reproducibility.
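As an illustration of what such a check can involve (the journal's actual tooling is not detailed in this article), a reproducibility specialist might rerun the authors' shared analysis and compare the regenerated table against the published values. The script name, output path, and reference file below are hypothetical.

# Illustrative sketch only: rerun an author's deposited analysis script and
# check that the regenerated table matches the one reported in the paper.
# "analysis.py", the output path, and the reference file are hypothetical.
import subprocess

import pandas as pd

# Step 1: rerun the analysis exactly as the authors shared it.
subprocess.run(["python", "analysis.py"], check=True)

# Step 2: load the regenerated table and the values reported in the paper.
regenerated = pd.read_csv("output/table_1.csv")
published = pd.read_csv("published_table_1.csv")

# Step 3: confirm the two agree, allowing only small rounding differences.
pd.testing.assert_frame_equal(regenerated, published, check_exact=False, rtol=1e-3)
print("Table 1 reproduced within tolerance.")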

For authors, going through this review process can help validate the high quality of their research work. It also creates benefits for readers:

You can have peace of mind that what you are looking at is quite robust, knowing another person, independent of the authors, has been able to go through and reproduce the same figures. If you download the code yourself, you can be confident it’s likely to be very usable.

Ben notes a very encouraging initial response from authors:

This has only been live for a short time, but the response has been very positive. Authors we’ve spoken to have been really enthusiastic about becoming involved and keen for feedback on their code.

JAS Reproducibility Prize

In 2023, prior to launching the reproducibility review process, the Journal of Archaeological Science also introduced a reproducibility prize. Through this initiative, significant recognition has been given to the hard work being done by researchers. Ben comments:

The winning paper was selected due to being exceptional in its computational reproducibility. It gave this piece of research work a big boost of publicity and rewarded the authors’ fantastic effort that previously may have gone unnoticed.

Ben hopes these initiatives will encourage others to consider the reproducibility of their work in greater depth:

I really hope the review and the prize will help motivate researchers to spend time polishing up code so others can easily read and use it. It indicates that data and code, like the article itself, are part of a first-class research product and that people are going to look at them and use them in their own research.

I hope even more publishers see this and think, ‘This is something people care about. We should implement something similar.’

Achieving greater inclusivity through reproducibility

While publisher recognition is a fantastic support for reproducibility, Ben believes there are many other important reasons for scientists to focus on it. One of these is the chance to achieve greater global inclusivity through open data and code sharing — in much the same vein as open access publishing. Ben comments:

I have experienced how different research institutions around the world don’t have equal access to skills and training. Reproducibility creates the opportunity to share more of the work done at wealthier institutions with individuals working in smaller, more remote locations.

If researchers can make their code files and data sets available on trustworthy repositories, others can easily download, explore, analyze, and compare them with their own data. This gives them a degree of participation that they wouldn’t otherwise have.
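As a simple illustration of that kind of reuse, the sketch below downloads a deposited dataset and puts it alongside a reader's own measurements. The repository URL, file names, and column names are placeholders, not a real deposit.

# Sketch of reusing openly deposited data: fetch the shared file, then
# compare it with your own measurements. URL and file names are placeholders.
import io
import urllib.request

import pandas as pd

DEPOSIT_URL = "https://example.org/record/12345/files/site_data.csv"  # placeholder

# Step 1: download the shared dataset from the repository.
with urllib.request.urlopen(DEPOSIT_URL) as response:
    shared = pd.read_csv(io.BytesIO(response.read()))

# Step 2: load your own data and stack the two with a source label.
own = pd.read_csv("my_site_data.csv")
combined = pd.concat([shared.assign(source="shared"), own.assign(source="own")])

# Step 3: a simple side-by-side summary of the two sources.
print(combined.groupby("source")["artifact_length_mm"].describe())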

He also notes some of the historic imbalance in this area:

The world of computers and software has been very male dominated in recent times, as well as having a middle-class, White, Western bias. Increased effort on reproducibility and sharing of code and data will help diminish this bias by providing more equitable access to research materials.

A number of organizations are now working to support software developers and research scientists using programming languages in historically underrepresented countries. This has included “unconferences” hosted by rOpenSci and other nonprofit organizations. Ben explains:

These are more relaxed, unconventional conferences. Attendees are chosen to widen the demographic compared to a traditional computer science or research programming meeting. These deliberate acts to change the nature of the community can also completely change the pathway of someone’s career.

Addressing barriers to sharing

Despite the many good reasons for focusing on reproducibility, the idea of open data sharing still provokes mixed reactions. Ben comments:

Often, there is the idea that the journal article is the traditional package of what is shared and everything else, like data and code, is normally kept private.

Ben attributes this to tradition, along with concerns about someone else taking your work and getting credit for it. This fear is particularly relevant for early career researchers:

When you are starting out, every line on a CV is hard won through blood, sweat and tears and each line can make a big difference to your career trajectory. Openly sharing work at this stage is a very delicate issue, understandably, out of fear of losing that line to someone else.

While he agrees the concern is valid, Ben believes any theft of work happens very rarely:

It’s very hard for someone to steal someone else’s data or code and quickly benefit from it. A great deal of time and energy still needs to be invested to contextualize it into a new contribution to science. I suspect it’s much less common than people think.

There are some instances when sharing data is not appropriate, particularly in the field of archaeology. Ben explains:

I published a paper a few years ago where, if we had shared the data, it would have revealed the GPS coordinates for significant cultural sites. The risk that people would visit the sites and damage them was too great. Protecting culturally sensitive data is one instance where it’s perfectly reasonable not to share.

There are sometimes good reasons for not making all data open. It’s about compromise — balancing justified reasons for not sharing against less well justified fears.

Making reproducibility (and programming) the norm

In the future, Ben would like to see an even greater embedding of reproducibility, made possible in part by programming skills becoming a part of scientific education:

I would like reproducibility to become a more normal part of the field. We need to be teaching students the data science tools they need from the start so they, in turn, can fulfill the emerging expectations of the scholarly community.

I would love if it got to the point that a reproducibility review could be applied to every paper, because all researchers are using a form of programming language that enables this kind of review.

He would also like to see other journals adopt a reproducibility review process:

I’m thrilled that we have a reproducibility review in place now at the Journal of Archaeological Science. This will be a great example for other journals to follow, especially those with editors who care about reproducibility and the transparency and inclusiveness that it affords. We can also help point out the individuals in their editorial community who already have the necessary skills to support the review process.

“There are so many authors out there who are already writing code that people really want to look at. Having a reproducibility review just ties a bow over all this hard work.”


Ben Marwick, PhD

Professor of Archaeology, University of Washington | Associate Editor, Journal of Archaeological Science

Contributor