Research Evaluation & Funding

Impact of science: the need to measure

As research becomes more international, the use of metrics to measure the economic and societal impact of science has gained in importance

Since the onset of the global economic crisis and the subsequent pressure placed on public finances, there have been growing calls for science to show its return on investment — given that much of the research enterprise is directly or indirectly funded from the public purse. At the same time, questions have been raised about the social and economic good that science is contributing to society. As a consequence, the use of metric methodologies which measure the economic and societal impact of research and scholarship has increasingly gained in importance.

The internationalization of research

International research collaboration has grown unabated in recent years – driven in part by the availability of low-cost travel and even lower-cost telecommunications via the Internet – and is in itself a shining example of the economic and societal impact of research. At The Impact of Science conference in June, stakeholders representing many different perspectives came together to discuss metrics that attempt to convey the economic and societal impact of research and scholarship.

One of my key takeaways from that conference was the question of how we can minimize the burden of collating the evidence of research impact whilst also ensuring that the data is rich enough to reflect the various ways research outcomes can have an impact. This is why it is necessary to put a governance structure in place that ensures that the assessment of impact is done in a systematic way – even within the context of a single institute, country or research discipline. However, the most critical element to be considered in the future measurement of impact is the engagement of researchers themselves.

The role of metrics in research assessment

The Impact of Science conference

For the second consecutive year, policymakers, academics and industry professionals convened in Amsterdam in June for The Impact of Science conference, organized by ScienceWorks in cooperation with Elsevier. Participants from academia, government and the corporate sector discussed topics related to science policy and research management.

"Inspired by the success of last year´s conference, we decided to continue facilitating this very important discussion," said Petra Ullrich, Elsevier's Marketing Director for Europe. "Exchanging ideas around research assessment and the impact on society are important topics. We started similar initiatives in other countries of Europe, and we will continue to focus on such initiatives in 2015 as well."

Find more information on the Impact of Science website.

Metrics have been a divisive issue in many research communities. Citation-based research metrics were first developed in the post-war era in the fields of biochemistry, genetics and closely allied life sciences. They subsequently spread to fields whose research cultures are less oriented toward rapid and frequent publication and citation in peer-reviewed journals, such as the social sciences and humanities. The perceived inevitability of metrics-based evaluation being applied to the social sciences and humanities by major research funders, such as the European Commission, has prompted a number of analyses of the present metrics landscape and of the applicability of these metrics across research disciplines. As a result, recent thinking on metrics has moved from a supply-side model, which uses the metrics most readily calculated from the data available, to a demand-side model, which starts from the purpose of the measurement and creates metrics that most closely match the need.

Snowball Metrics as an indicator of strengths and weaknesses

The recent Snowball Metrics initiative, which began in the UK but is now gaining traction worldwide, is an excellent case in point. The academia-industry collaboration, which was initiated by some of the most research-intensive universities in the UK, including the University of Oxford and the University of Cambridge, along with Elsevier, aims to introduce metric methodologies which can enable institutional benchmarking on a global scale. Snowball Metrics thus strives to develop clearly defined metrics in close collaboration with the research community in order to help universities establish institutional strategies on the basis of their research performance.

The output of Snowball Metrics is a set of mutually agreed and tested methodologies, called "recipes," which are available free of charge and can be used by anyone for their own purposes and within their own business models. In June, a second edition of the Snowball Metrics Recipe Book was published that includes 14 recipes relating to factors such as collaboration, societal impact, intellectual property and spin-offs.
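To make the notion of a "recipe" concrete, here is a minimal sketch in Python of one collaboration-style indicator: the share of an institution's publications whose authors span more than one country. This is an illustration only, not the published Snowball Metrics recipe; the record format and the "countries" field are assumptions made for this example.

```python
# Illustrative sketch of a collaboration-style metric: the share of an
# institution's publications whose authors span more than one country.
# NOT the official Snowball Metrics recipe; the input format (a list of
# records carrying a "countries" field) is an assumption.

def international_collaboration_share(publications):
    """Return the fraction of publications with authors in 2+ countries."""
    if not publications:
        return 0.0
    international = sum(
        1 for pub in publications if len(set(pub["countries"])) > 1
    )
    return international / len(publications)

# Example: three papers, two of them internationally co-authored.
papers = [
    {"title": "A", "countries": ["GB", "NL"]},        # international
    {"title": "B", "countries": ["GB", "GB"]},        # domestic
    {"title": "C", "countries": ["GB", "US", "DE"]},  # international
]
print(international_collaboration_share(papers))  # 0.666...
```

The value of an agreed recipe is that every institution counts the same things in the same way, so the resulting numbers can be benchmarked rather than merely reported.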

The Snowball Metrics landscape

University research executives need metrics for all the research activities in which their institution invests resources and would like to excel. Representatives of major research institutions therefore agreed to use Snowball Metrics, which cover all of these activities and include an additional set of denominators that can reveal research strengths at a more granular level or normalize for institutional size. The chart below shows the factors used in Snowball Metrics.

The Snowball Metrics Landscape
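To illustrate how such a denominator works in practice, the short Python sketch below normalizes raw publication counts by academic staff FTE so that institutions of very different sizes can be compared. The institution names, counts and the per-FTE denominator are invented for illustration and are not Snowball Metrics data.

```python
# Illustrative sketch of size normalization using a denominator such as
# academic staff FTE. All values are invented for illustration only.

institutions = {
    "University A": {"outputs": 12000, "academic_fte": 4000},
    "University B": {"outputs": 3000, "academic_fte": 600},
}

for name, data in institutions.items():
    per_fte = data["outputs"] / data["academic_fte"]
    print(f"{name}: {per_fte:.1f} outputs per academic FTE")

# University A: 3.0 outputs per academic FTE
# University B: 5.0 outputs per academic FTE
# The smaller institution looks stronger once size is taken into account.
```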

Related resources

Elsevier's response to HEFCE's call for evidence

This document describes Elsevier's position on the use of research metrics in research assessment by laying out 12 guiding principles. It is a response to the call for evidence issued by the Higher Education Funding Council for England (HEFCE) to collect evidence regarding the role and usefulness of metrics in research assessment and management across different academic disciplines.

Download it here.


Elsevier Connect Contributor

Andrew Plume, PhD

Dr. Andrew Plume specializes in scientometrics (the quantitative analysis of science). Drawing on a broad spectrum of data – from specific primary sources such as individual authors and articles to the broadest aggregates such as countries and entire subject domains – Dr. Plume studies information flows in the scholarly literature by analyzing patterns of publication and citation. One of his particular interests is the emergence of alternative metrics for research evaluation. He frequently presents on these topics, among others, to journal editors, learned and scholarly societies, and the publishing community.

After receiving his PhD in plant molecular biology from the University of Queensland, Australia, and conducting post-doctoral research at Imperial College London, Dr. Plume joined Elsevier in 2004. He has co-authored research and review articles in the peer-reviewed literature and is a member of the editorial board of Research Trends.
