Interview with Professor Gunnar Sivertsen

Do traditional bibliometric counting indicators – e.g., Journal Impact Factors, h-index – accurately represent the scientific output of institutions, disciplines and individuals? Yes and no, according to Gunnar Sivertsen. Metrics such as these can provide an overview and insights into what is going on in science at the macro level. But they do not reflect the real life of scientific work, which is a collaborative enterprise in which people share ideas, criticize one another's work to improve it, and encourage each other; at the individual, or micro, level, these are the drivers of scientific progress, and the metrics miss them. We need indicators that represent not only the literature but the real life of scientific work – that is, quality, not just quantity.

Sivertsen is passionate about developing quality-oriented metrics that more accurately reflect the value of scientific work, especially given the power of bibliometric indicators to influence research and funding decisions, university rankings, and individual researchers’ prestige and collaboration opportunities. To that end, he developed the Norwegian Model, which aims to cover all peer-reviewed scholarly literature in every area of research in a single weighted indicator, making research efforts comparable across departments and faculties within and between research institutions.
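The article does not spell out the weighting scheme, but the basic idea of such an indicator can be sketched in a few lines of Python. The publication types, channel levels and point values below are illustrative assumptions only, not the official weights of the Norwegian publication indicator, which are set and revised nationally.

```python
# Minimal sketch of a Norwegian Model-style weighted publication indicator.
# The (publication type, channel level) weights below are assumptions for
# illustration; the official Norwegian weights are decided nationally.

WEIGHTS = {
    ("journal_article", 1): 1.0,
    ("journal_article", 2): 3.0,
    ("book_chapter",    1): 0.7,
    ("book_chapter",    2): 1.0,
    ("monograph",       1): 5.0,
    ("monograph",       2): 8.0,
}

def publication_points(pub_type: str, channel_level: int) -> float:
    """Return the weighted points for a single publication."""
    return WEIGHTS[(pub_type, channel_level)]

def unit_score(publications: list[tuple[str, int]]) -> float:
    """Sum weighted points over a department's publications, so units with
    different publishing cultures can be compared on one scale."""
    return sum(publication_points(t, lvl) for t, lvl in publications)

# Example: two level-1 articles, one level-2 article, one level-1 monograph
print(unit_score([("journal_article", 1),
                  ("journal_article", 1),
                  ("journal_article", 2),
                  ("monograph", 1)]))  # -> 10.0
```

The point of the weighting is that a book-oriented humanities department and a journal-oriented science department both accumulate points from the outlets they actually use, which is what makes the single indicator comparable across fields.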

Further, he is the lead author of a recent study testing his team’s newly developed counting method, called modified fractional counting, which enables comparability across areas of research with different co-authorship practices. Sivertsen also leads strand 3 of the Centre for Research Quality and Policy Impact Studies, a network of organizations in Denmark, the Netherlands, Norway, Sweden and the United Kingdom with an eight-year commitment (starting in 2016) to studying the nature and mechanisms of research quality and the impact of research on society.
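To see why co-authorship practices matter for counting, the sketch below contrasts full counting, plain fractional counting, and a square-root compromise between the two. The square-root variant is included only as an illustrative assumption about the family of methods modified fractional counting belongs to; it is not claimed to be the study’s exact formula.

```python
import math

# Credit a unit receives for one paper with n authors, m of whom belong
# to the unit, under different counting schemes. The square-root variant
# is an illustrative assumption, not necessarily the published formula.

def full_counting(m: int, n: int) -> float:
    """Every contributing unit receives full credit (1.0)."""
    return 1.0 if m > 0 else 0.0

def fractional_counting(m: int, n: int) -> float:
    """A unit with m of the n authors receives credit m/n."""
    return m / n

def sqrt_fractional_counting(m: int, n: int) -> float:
    """Illustrative compromise: credit grows with the unit's share but is
    damped less severely for highly collaborative papers."""
    return math.sqrt(m / n)

# A paper with 10 authors, 2 of them from the unit being evaluated
m, n = 2, 10
print(full_counting(m, n))                        # 1.0
print(round(fractional_counting(m, n), 3))        # 0.2
print(round(sqrt_fractional_counting(m, n), 3))   # 0.447
```

Full counting inflates the output of fields with long author lists, while plain fractional counting penalizes collaboration heavily; intermediate schemes of this kind aim to make credit comparable across both situations.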

There is so much evaluation going on, and the science system is now so large, that people try to make evaluation easier and quicker by using indicators instead of making science-based or experience-based judgments, Sivertsen says. We see this with Journal Impact Factors, which evaluate where you published rather than what you published. There are other aspects to the issue as well, such as the limited coverage of the social sciences and humanities in bibliometric databases, which in turn affects prestige and funding.

Sivertsen is committed to continuing to innovate and to help develop best practices for evaluating research as a member of the ICSR. To me, this is the reason we are launching this new center – to try to closely match the indicators with the real practice of research they are meant to reflect, he says. I think we would all agree that right now, they are not perfectly matched.


Gunnar Sivertsen is Research Professor and Head of Bibliometric Research at the Nordic Institute for Studies in Innovation, Research and Education (NIFU) in Oslo, Norway. He earned his doctoral degree in Scandinavian literature in 2006 from the University of Oslo.