
How is research assessment changing? Watch video highlights from the “Transforming Research” conference

15 experts share their thoughts on mapping science, quantifying uncertainty, and understanding the path dependence of a research field

The Transforming Research conference was livestreamed. Watch videos of the presentations below.

What changes might we want to make to the way we assess research? Which practices are driving those changes, and what tools are available to support all the various actors in the research ecosystem?

The following insights come from a group of researchers, funders, librarians, publishers, and technologists who gathered in Baltimore last month to examine these and other issues. Their discussions yield lessons that can be put into practice whether you're a humanities scholar, a poet, or a scientist.

To make the event accessible to the broadest possible audience, we streamed it via Facebook Live. Here, we’re sharing the sessions in five parts, with markers (listed as time offsets from the end of each video) to help you jump to a specific session of interest. Slides will be made available at the conference website for those who want a closer look. Sound quality is reasonably good, but do let us know in the comments if something isn’t clear, and we’ll do our best to clarify.


Welcome + Session 1

In the first session, we learned how societal demand translates into research publication output and how an early-career researcher thinks about and uses the research assessment tools they encounter. We also got a fascinating look at using natural language processing (NLP) to recursively parse statements in research papers and show not just what is known, but what is unknown in a research field.
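To give a flavor of that idea, here is a minimal, hypothetical sketch in Python (my own illustration, not the system presented at the conference) that flags sentences containing common uncertainty cues; the cue list and the naive sentence splitting are assumptions made purely for the example.

import re

# Illustrative cue phrases that often signal open questions or hedged claims.
# This list is an assumption for the sketch, not the vocabulary used in the talk.
UNCERTAINTY_CUES = [
    r"\bremains unclear\b",
    r"\bis (?:still )?unknown\b",
    r"\bhas not been (?:established|determined|shown)\b",
    r"\bfurther (?:research|work) is needed\b",
]

def find_unknowns(text):
    """Return sentences that contain at least one uncertainty cue."""
    # Naive sentence split; a real pipeline would use a proper parser.
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s for s in sentences
            if any(re.search(cue, s, re.IGNORECASE) for cue in UNCERTAINTY_CUES)]

abstract = ("Gene X regulates pathway Y. "
            "However, its role in disease Z remains unclear.")
print(find_unknowns(abstract))  # -> ['However, its role in disease Z remains unclear.']

A real system would use a far richer vocabulary and actual parsing, but even this crude version shows how statements about what is unknown can be surfaced automatically.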

Dr. Ann Seymour of Johns Hopkins welcomed the group, and I introduced the first panel, featuring Dr. Richard Klavans from mapsofscience.com and Dr. Kiarri Kershaw from Northwestern University Feinberg School of Medicine. The pairing of Dr. Klavans’s 30,000-foot view of science with Dr. Kershaw’s view from “in the trenches” made for an illuminating comparison! Dr. Klavans showed us that a systematic view of science can predict where funding is going to be, but an early-career researcher such as Dr. Kershaw won’t necessarily see this if they are focused too narrowly on their own field.

After an extended discussion with the two speakers and the audience, Chaomei Chen of Drexel spoke about mapping uncertainty in scientific topics. This could lead in a number of interesting directions, such as toward a way to weight citations relative to one another rather than counting them all equally, which would yield a fairer assessment of research. The thought of any sort of quantitative assessment of research makes some people uneasy, and we explored why in the final session, where philosophers discussed “metrics as a system of governance” and “the mindless way the math is done.”
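As a rough illustration of that weighting idea (a toy example of mine, not Dr. Chen’s method), one could give each citation a weight between 0 and 1 instead of counting every citation as 1; the weights below are invented placeholders.

# Toy weighted-citation score: each citation carries a weight between 0 and 1
# instead of counting 1 per citation. The weights here are placeholders; in
# practice they might come from the citing sentence's context or certainty.
citations = [
    {"citing_paper": "A", "weight": 1.0},  # central, load-bearing citation
    {"citing_paper": "B", "weight": 0.5},  # supporting mention
    {"citing_paper": "C", "weight": 0.1},  # passing or perfunctory mention
]

raw_count = len(citations)                             # 3
weighted_score = sum(c["weight"] for c in citations)   # 1.6

print(f"raw count: {raw_count}, weighted score: {weighted_score:.1f}")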

Session 1: Assessing broader impact and innovation across the disciplines

Speaker Time (from end of video)
Johns Hopkins Welcome -3:05:44
Panel introduction from William Gunn -3:04:17
Dick Klavans -2:54:30
Kiarri Kershaw -2:32:13
Panel Discussion -2:19:44
Chaomei Chen -1:06:09
Chaomei Q&A -18:18

Session 2: Making metrics work better for decision-makers

Speaker Time (from end of video)
Introduction from Stacy Konkiel -2:55:47
Kari Wojtanik -2:44:30
Stacy Konkiel -2:22:37
Kari & Stacy Q&A -2:03:10
George Santangelo -1:31:20
George Q&A -33:54

Some highlights from session two include the difference between how a funder and a researcher think about evaluating research, Stacy Konkiel’s discussion of values, and the neat visualization of research translation by the iTrans tool in George Santangelo’s talk. Kari Wojtanik led us through how Komen thinks about assessment and makes funding decisions, and then Stacy gave a report from the HuMetrics Initiative, an exploration of the values of scholarship in the humanities and social sciences. Following a discussion with the audience, George, from the NIH Office of Portfolio Analysis, showed the work they are doing, including the development of a metric, the Relative Citation Ratio, and the iSearch, iCite, and iTrans tools they use to evaluate research.
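For readers unfamiliar with the Relative Citation Ratio, it is, roughly speaking, an article’s citation rate divided by an expected citation rate for its field, where the field benchmark is derived from the paper’s co-citation network and normalized against NIH-funded work. The sketch below uses made-up numbers just to show the shape of the calculation; see George’s talk and the iCite documentation for the real definition.

# Simplified view of the Relative Citation Ratio (RCR): an article's citations
# per year divided by the expected citations per year for its field, where the
# field benchmark comes from the paper's co-citation network. All numbers here
# are invented for illustration.
article_citations_per_year = 6.0
field_expected_citations_per_year = 3.0  # benchmark from co-citation neighborhood

rcr = article_citations_per_year / field_expected_citations_per_year
print(f"RCR = {rcr:.1f}")  # 2.0 -> cited about twice as often as its field's norm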


Session 3: Metrics and precision medicine

Session three was a set of demonstrations from toolbuilders, focused on the theme of supporting precision medicine. We saw demos from Altmetric, Clarivate, Plum Analytics, Kudos, UberResearch, and SciVal.

Speakers and tools Time (from end of video)
Introduction from Ann Seymour -1:25:17
Altmetric -1:19:10
Clarivate -1:04:40
Plum Analytics -49:14
Kudos -34:28
UberResearch -20:41
SciVal -11:51

Session 4: Developing and assessing broader research impacts – examples from the community

The focus of this session was the library community. Don’t miss Aaron Sorensen from the Digital Science Consultancy at -45:19, showing what you can learn from the differences in altmetrics profiles of researchers with similar citation profiles. Heather Coates spoke about gathering values-driven evidence to support researchers, and Fatima Barnes described the impact of an institution-wide Journal Club at Howard University. Robyn Reed discussed how they do impact assessment at Penn, Patty Smith from Northwestern showed the qualitative side of things, with a focus on storytelling in science, and Michael Bales from Weill Cornell Medicine showed us a Drupal module that builds an institutional dashboard for researchers by incorporating publication metadata.

Speaker Time (from end of video)
Introduction from Kristi Holmes -1:12:28
Heather Coates -1:09:00
Fatima Barnes -56:51
Aaron Sorensen -45:19
Robyn Reed -33:20
Patty Smith -20:35
Michael Bales -7:46

Session 4 Q&A + Session 5: The map versus the territory: exploring the relationship between research and metrics

Worried about the use of quantitative approaches to research metrics? This is the discussion for you! Three philosophers discuss research metrics from what is essentially a harm-reduction framework: they give practical tips for reducing the harm caused by poorly designed metrics, while recognizing that reading and understanding the research in context is the ideal approach for many assessment use cases. The session included an extended discussion of why we choose the methods and data sources we do, and what we should consider when developing an assessment program.

Robert Frodeman showed us how metrics are as much a system of governance as they are a description of the research landscape, Prof. J. Britt Holbrook had us focus on building tools that actually serve researchers, and Professor Steve Fuller emphasized the idea of path dependence in research, showing how whoever comes first in a field establishes a biasing effect that simplistic metrics, such as raw citation counts, will faithfully reflect. Because the three philosophers knew each other and were able to play off one another, the extended discussion had a lively energy.
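To see why a simplistic approach can bake in that head start, here is a small toy simulation (my own illustration, not anything shown in the session) of “rich-get-richer” citation accumulation, in which each new paper tends to cite already well-cited papers.

import random

random.seed(42)

# Toy preferential-attachment model: papers arrive one at a time, and each new
# paper cites an earlier one with probability proportional to (1 + existing
# citations). Early arrivals accumulate a lead that raw counts never correct for.
citations = []  # citations[i] = number of citations received by paper i

for new_paper in range(200):
    if citations:
        weights = [1 + c for c in citations]
        cited = random.choices(range(len(citations)), weights=weights)[0]
        citations[cited] += 1
    citations.append(0)

top = sorted(range(len(citations)), key=lambda i: citations[i], reverse=True)[:5]
print("most-cited papers (by arrival order):", top)
print("their citation counts:", [citations[i] for i in top])

Run it a few times: the most-cited papers are almost always among the earliest arrivals, which is exactly the path dependence Fuller described and exactly what a raw citation count cannot distinguish from merit.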

Speaker Time (from end of video)
Session 5 start -1:41:03
Bob Frodeman intro -1:40:33
Britt Holbrook -1:36:15
Steve Fuller -1:25:00
Bob Frodeman -1:12:27
Panel discussion -1:04:27

Transforming Research

Transforming Research (@TransformRes) reached 100 researchers, funders, policymakers, librarians, publishers, and toolmakers in Baltimore and 5,000 viewers online. The organizers were William Gunn, Kristi Holmes, Stacy Konkiel, Ann Seymour, Anne Stone and Michael Taylor.
