์ฃผ์š” ์ฝ˜ํ…์ธ ๋กœ ๊ฑด๋„ˆ๋›ฐ๊ธฐ

๊ท€ํ•˜์˜ ๋ธŒ๋ผ์šฐ์ €๊ฐ€ ์™„๋ฒฝํ•˜๊ฒŒ ์ง€์›๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ์˜ต์…˜์ด ์žˆ๋Š” ๊ฒฝ์šฐ ์ตœ์‹  ๋ฒ„์ „์œผ๋กœ ์—…๊ทธ๋ ˆ์ด๋“œํ•˜๊ฑฐ๋‚˜ Mozilla Firefox, Microsoft Edge, Google Chrome ๋˜๋Š” Safari 14 ์ด์ƒ์„ ์‚ฌ์šฉํ•˜์„ธ์š”. ๊ฐ€๋Šฅํ•˜์ง€ ์•Š๊ฑฐ๋‚˜ ์ง€์›์ด ํ•„์š”ํ•œ ๊ฒฝ์šฐ ํ”ผ๋“œ๋ฐฑ์„ ๋ณด๋‚ด์ฃผ์„ธ์š”.

์ด ์ƒˆ๋กœ์šด ๊ฒฝํ—˜์— ๋Œ€ํ•œ ๊ท€ํ•˜์˜ ์˜๊ฒฌ์— ๊ฐ์‚ฌ๋“œ๋ฆฝ๋‹ˆ๋‹ค.์˜๊ฒฌ์„ ๋ง์”€ํ•ด ์ฃผ์„ธ์š”ย ์ƒˆ ํƒญ/์ฐฝ์—์„œ ์—ด๊ธฐ

Elsevier
์—˜์Šค๋น„์–ด์™€ ํ•จ๊ป˜ ์ถœํŒ
Connect

What determines whether a research article is cited in policy?

2023๋…„ 10์›” 17์ผ

์ €์ž: Linda Willems

Basil Mahfouz of UCL gives a presentation about his research with Elsevier's International Center for the Study of Research (ICSR)

Researcher Basil Mahfouz hopes the answer will help him design a powerful tool for policymakers

How do governments select the research evidence they use to guide their policymaking โ€” particularly when responding to a global crisis?

The obvious answer is that they draw their inspiration from current or highly cited publications. However, a new study by UCL (University College London) researcher Basil Mahfouz suggests the process is not so clear-cut.

Basil is a third-year PhD student in UCL’s Department of Science, Technology, Engineering and Public Policy (STEaPP). His PhD project is supported by Elsevier’s International Center for the Study of Research (ICSR) and explores how research impacts society.

As part of that work, Basil recently conducted a case study with PhD supervisors Prof Sir Geoff Mulgan and Prof Licia Capra tracking the impact of research on education policy during COVID-19. As Basil explained:

Basil Mahfouz

The pandemic resulted in measures, such as school closures, that disrupted learning for more than 1.5 billion students. And education policymakers worldwide discussed these measures in thousands of policy documents, most of which referenced academic research.

He added:

There were 450,000 scholarly papers published about COVID-19 between March 2020 and December 2022. To put that in context, thatโ€™s almost as many as all the research papers on climate change ever published. So, we were interested to see how well education policymakers took advantage of that vast amount of literature.

What they discovered surprised them.

Key findings

1. Policymakers cited โ€œolderโ€ research papers more often than newer findings.

Basil and his co-authors focused on policies issued between March 2020 and December 2022 that contained recommendations or comments on COVID-19 measures for educational institutions. They found that more than 75% of the peer-reviewed papers cited in these policies were published prior to 2020. Using natural language processing, they also established that for 48% of those older research papers, there were newer articles available with comparable abstracts. Basil pointed out: “The average was 70 new papers for every old paper, although it’s worth noting that some papers had a lot while others had very few.”

Chart: Quality of older cited papers vs newer uncited papers

2. The policies were more likely to cite recent medical research than recent education research.

While only 20% of the education research cited in the policies was published in 2020 or later, that figure rose to more than 80% for medical research. Basil said: “The medical research findings aren’t surprising given that there was no research on COVID prior to the pandemic. However, that’s a very low percentage for education.”

3. Policymakers were more likely to cite sources from their own countries.

According to Basil, in the case of education research, this is partly attributable to the fact that a high proportion of it is localized. He added: “We also carried out a comparative analysis of research usage between the USA, the UK, the European Union and IGOs (intergovernmental organizations involving two or more nations). We found that only 0.62% of citations were shared across all four jurisdictions. Scientific articles referenced by IGOs were the most ubiquitous, accounting for over 10% of citations across all other policies.”
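The article doesn’t spell out the computation behind that overlap figure, but it can be reproduced with simple set operations over each jurisdiction’s cited DOIs. Here is a minimal Python sketch; the DOI sets are hypothetical stand-ins for the real Overton data.

```python
# Sketch: estimating how many cited papers are shared across jurisdictions.
# The DOI sets below are hypothetical stand-ins for the real Overton/Scopus data.

citations_by_jurisdiction = {
    "USA": {"10.1000/a", "10.1000/b", "10.1000/c"},
    "UK":  {"10.1000/a", "10.1000/d"},
    "EU":  {"10.1000/a", "10.1000/e"},
    "IGO": {"10.1000/a", "10.1000/b"},
}

all_cited = set.union(*citations_by_jurisdiction.values())
shared_by_all = set.intersection(*citations_by_jurisdiction.values())

# Fraction of unique cited papers appearing in all four jurisdictions
# (the study reports 0.62% for the real data).
print(f"Shared across all four: {len(shared_by_all) / len(all_cited):.2%}")

# How widely IGO-cited papers travel: share also cited by other jurisdictions.
igo_papers = citations_by_jurisdiction["IGO"]
others = set.union(*(s for j, s in citations_by_jurisdiction.items() if j != "IGO"))
print(f"IGO papers cited elsewhere too: {len(igo_papers & others) / len(igo_papers):.2%}")
```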

4. There was a weak relationship between inferred research excellence and citations in policy.

Basil divided the papers into two groups: those that were cited in policy and those that were found to be relevant to the policies but not cited. For each paper, he checked proxies of research excellence such as field-weighted citation impact (FWCI), the average h-index for all co-authors, and the CiteScore of the journal in which it was published.

Chart: The average h-index of authors for papers published during 2021 and 2022 which were found to be relevant to the policies but were not cited (blue), and papers published prior to 2020 that were cited in the policies (orange).

โ€œWe found that, on average, the older papers performed better across all indicators,โ€ he said. โ€œHowever, overall, the link between the indicators and policy citations was relatively weak.โ€

According to Basil, this may be due to the fact that during the pandemic, under-pressure policymakers reverted to โ€œresearch they were already familiar with or articles published by people they trusted.โ€
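The article doesn’t say how the strength of that link was quantified, but one standard way to test the association between a continuous indicator and a binary cited-in-policy outcome is a point-biserial correlation. The sketch below is illustrative only; the data frame values and column names are hypothetical.

```python
# Sketch: measuring the association between excellence proxies and policy
# citation. Values and column names are hypothetical placeholders.
import pandas as pd
from scipy.stats import pointbiserialr

papers = pd.DataFrame({
    "fwci":            [2.1, 0.8, 1.5, 3.0, 0.4, 1.1],   # field-weighted citation impact
    "avg_h_index":     [24, 9, 15, 31, 6, 12],            # mean h-index of co-authors
    "citescore":       [8.2, 3.1, 5.5, 10.4, 2.0, 4.3],   # journal CiteScore
    "cited_in_policy": [1, 0, 0, 1, 0, 1],                # 1 = cited by a policy document
})

# A correlation near zero would correspond to the "relatively weak" link reported.
for indicator in ["fwci", "avg_h_index", "citescore"]:
    r, p = pointbiserialr(papers["cited_in_policy"], papers[indicator])
    print(f"{indicator}: r = {r:.2f}, p = {p:.2f}")
```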

The study authors also checked for a link between research accessibility and policy citations. โ€œInterestingly, we found that most papers cited in policies werenโ€™t open access,โ€ Basil noted. โ€œI think there are multiple reasons for that; for example, open access papers were not as prevalent 10 years ago as they are today.โ€

Combining data to gain new insights

For the project, Basil worked in collaboration with Elsevier’s ICSR Lab, a cloud-based computational platform that researchers can use to analyze large structured datasets, including those that power Elsevier solutions such as Scopus and PlumX Metrics, and with Overton, a global database of policy documents and their citations.

Overton supplied the authors with a list of 2,452 policies that fit their criteria. The policies spanned 49 countries in total, although the majority (86%) were from the US, the UK and the EU.

Collectively, the policies included circa 24,000 scholarly citations, which after deduplication resolved to around 12,000 unique papers. Basil used the DOIs (digital object identifiers) of these papers to match them to datasets held in the ICSR Lab, which gave him the metadata for 8,818 of the policy-cited articles.
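A minimal sketch of what that deduplication-and-matching step might look like, assuming citations arrive as (policy, DOI) pairs and metadata is keyed by DOI. The frames and column names are illustrative, not the actual ICSR Lab schema.

```python
# Sketch: deduplicating policy citations by DOI and joining to publication
# metadata. The frames are toy stand-ins; the real data lives in the ICSR Lab.
import pandas as pd

policy_citations = pd.DataFrame({
    "policy_id": ["P1", "P1", "P2", "P3"],
    "doi": ["10.1000/X", "10.1000/x ", "10.1000/X", "10.1000/Y"],
})

metadata = pd.DataFrame({
    "doi": ["10.1000/x", "10.1000/y"],
    "year": [2016, 2021],
    "journal": ["J. Educ.", "Med. J."],
})

# DOIs are case-insensitive, so normalize before deduplicating and joining.
policy_citations["doi"] = policy_citations["doi"].str.strip().str.lower()
unique_papers = policy_citations.drop_duplicates("doi")[["doi"]]

matched = unique_papers.merge(metadata, on="doi", how="inner")
print(f"{len(unique_papers)} unique papers, {len(matched)} matched to metadata")
```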

After establishing that many of the policies cited older papers, Basil and his co-authors looked to see whether there were newer papers (post-2020) on the same topic that policymakers could have used.

To do this, they turned to SciVal Topic Prominence. Basil explained: โ€œSciVal assigns each paper to one of 96,000 different topics; these are collections of publications with a common intellectual interest. We matched 2,589 of the cited papers to a topic. And then we performed natural language processing on their abstracts to discover whether there were other, similar papers on that topic that were published during 2021 and 2022. We discovered that for 48% of these older papers, there were newer articles on the same topic available.โ€
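The quote doesn’t specify which NLP technique was used to compare abstracts, but a common baseline is TF-IDF vectors with cosine similarity. The sketch below counts, for each older cited paper, newer same-topic abstracts above a similarity threshold; the abstracts and the threshold value are hypothetical.

```python
# Sketch: for each pre-2020 paper cited in policy, count 2021-22 papers on the
# same topic with similar abstracts. TF-IDF + cosine similarity is one common
# baseline; the study does not specify its exact method, and the abstracts and
# threshold here are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

old_abstracts = [
    "Effects of school closures on learning outcomes in primary education.",
    "Teacher readiness for remote instruction during emergencies.",
]
new_abstracts = [
    "Learning loss after COVID-19 school closures in primary schools.",
    "Digital infrastructure and remote teaching quality: evidence from 2021.",
    "Vaccine uptake among healthcare workers.",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(old_abstracts + new_abstracts)
old_vecs, new_vecs = tfidf[: len(old_abstracts)], tfidf[len(old_abstracts):]

sims = cosine_similarity(old_vecs, new_vecs)
THRESHOLD = 0.3  # illustrative cut-off for "comparable" abstracts
for i, row in enumerate(sims):
    comparable = (row >= THRESHOLD).sum()
    print(f"Old paper {i}: {comparable} comparable newer papers")
```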

According to Basil, the involvement of the ICSR Lab was pivotal to the project: โ€œWorking with the Lab offers you flexibility โ€” you can easily shift or adapt your project as you make fresh discoveries. And while other datasets might contain higher overall counts of papers, they also come with limitations; for example, you donโ€™t only get peer-reviewed papers, and itโ€™s not easy to identify what is what.

โ€œWith the ICSR Lab, I know that all the papers Iโ€™m analyzing were published in peer-reviewed journals with good processes because theyโ€™ve passed Scopusโ€™ journal selection criteria. And alongside the standard publication metadata, I get a lot of additional information. This allows me to slice and dice the data in different ways and opens up avenues of more nuanced analysis.โ€

He added: โ€œThe platform isnโ€™t just about data โ€” it also provides computing power. And that was important for us given the huge datasets we were analyzing.โ€

Elsevierโ€™s Dr Andrew Plume, President of the ICSR and a supervisor for Basilโ€™s study, said: โ€œThe project and its first tranche of insights into the dynamics of public policymaking in times of crisis have been illuminating to us here in the center.โ€ He added:

โ€œThis work demonstrates the value of connecting the Overton policy database with Scopus publication data to ask questions not previously amenable to analysis at scale. With a rising desire across academia and government to understand and evaluate the contribution of research to society, this ability to connect the pathways from research to societal impact will become ever more critical."

Portrait photo of Andrew Plume

Andrew Plume, PhD

President, International Center for the Study of Research (ICSR), Elsevier

Next step โ€” building a new tool for policymakers

Basil aims to continue refining the model heโ€™s built for this case study: for example, by adding a comparative analysis between countries and continuing to optimize the parameters. He also plans to run new case studies on key policy topics such as climate change. He explained:

The goal is to create a pipeline, a model that you can slot different topics into for analysis. Iโ€™d like to turn this into a tool that could potentially support two use cases.

The first of these is providing governments with data for funding decisions.

โ€œEach year, government agencies globally invest around $600 billion into research. Thatโ€™s public money meant to help improve the research ecosystem and also benefit the public,โ€ Basil said. โ€œBut what are they funding? How are decisions made? A tool like this would provide them with data to guide decisions about where to invest for more evidence-based public policy.โ€

Basil also sees an opportunity to use it as a diagnostic tool to track exactly how research is making its way into policy. โ€œYou can benchmark policies, funders and institutions. You can help policymakers understand how well they are absorbing knowledge and where they need to improve. This will be fairer for researchers. At the moment, there is an assumption that if researcher A has more policy citations than researcher B, the formerโ€™s research is probably better. But perhaps researcher A is in a country where policymakers are better at finding and absorbing research. We could account for that and provide more context-sensitive indicators.โ€