Evaluating research: Who gets the money when the stakes are high?

Research leaders confront challenges in deciding what to fund for the greatest societal impact

Kari Wojtanik, PhD, Senior Manager of Evaluation & Outcomes at Susan G. Komen, presents at Elsevier’s Research Funders Summit. (Photos by Alison Bert)

When Dr. Kari Wojtanik came to Susan G. Komen – the largest nonprofit funder of breast cancer research outside the US government – seven years ago, she faced an immediate challenge. “We were getting a lot of questions from our stakeholders asking us if our research had contributed to a breakthrough in breast cancer,” said Dr. Wojtanik, Senior Manager of Evaluation & Outcomes. “And this was a challenging question for us to answer. It’s a challenge for many funders.”

The problem soon became apparent: “We needed to better understand what areas of research we had funded.”

The first step was to sort through paper files dating back to 1982, when the organization was founded, and combine them with more recent electronic files to create a grant “database.” That led the team to eventually create a system to better assess the impact of research they had funded and track progress towards organizational research goals.

“Over the next several years, we spent time building our evaluation tools and our evaluation program and approaches,” she said.

Evaluating research can be fraught with complexity, and the process is likely to differ from one organization to the next. It’s also a high-stakes process because it determines which research gets funded and ultimately becomes available to the public. That can mean a new treatment for cancer or heart disease or a more efficient means of energy production.

The complexity of research evaluation became apparent at Elsevier’s Research Funders Summit when research leaders from three very different organizations, including Susan G. Komen, shared the challenges they face in evaluating research and the unique approaches they use to measure the impact it has – or is likely to have – on society.

“Funders require an increasing amount of narrative input on their applications,” explained moderator Andrea Michalek, VP Research Metrics at Elsevier. “They’re looking for recent and relevant information on the impact of research. That’s not always easy to do, and different disciplines will want to show their impact in different ways. Having access to a broad range of metrics can help researchers understand what to emphasize in their grant applications.”

The desire to provide that range of research metrics and analytics was behind the company she founded, Plum Analytics, which joined Elsevier two years ago. One of its tools, PlumX Metrics, gathers research metrics for all types of scholarly research, showing how people are interacting with it online – including mentions in the news and on social media.

“There is no one-size-fits-all metric.”

Dr. Moody Altamimi, Director of the Office of Research Excellence at Oak Ridge National Laboratory (ORNL), and Dr. Sherry Sours-Brothers, Manager of Research Outcomes for the American Heart Association.

Of course, measuring the impact of research in any complex, wide-ranging system has its challenges.

“The first challenge is that there is no one-size-fits-all metric,” said Dr. Moody Altamimi, Director of the Office of Research Excellence at Oak Ridge National Laboratory (ORNL) in Tennessee, a US Department of Energy national laboratory. One factor that differentiates research groups is their output, she explained: “Some groups generate publications, some generate patents and others generate software and datasets.”

ORNL has its origins in the Manhattan Project of World War II and has grown over its 75-year history into a leading center for research in biology, chemistry, materials science, nuclear science, physics and engineering. Researchers come from around the globe to use the laboratory’s state-of-the-art facilities, which include Summit, currently the world’s fastest supercomputer, and the Spallation Neutron Source, one of the world’s top resources for neutron scattering analysis of materials.

An important undertaking for Dr. Altamimi’s group is to gauge impact across a diverse landscape of research activities.

“The ultimate goal is to develop an impact value chain that illuminates the relationship between inputs, activities and impacts,” she explained. “It should include the lifecycle from funding through research results all the way to impacts on a scientific field, economic competitiveness, or societal well-being.”

Measuring the value of funding early-career researchers

Dr. Sherry Sours-Brothers, Manager of Research Outcomes for the American Heart Association, talks about her organization’s approach to research evaluation.

The broad array of research programs also poses a challenge at the American Heart Association (AHA), which funded $160 million in research this past fiscal year.

“It makes it hard to design specific metrics for each program,” said Dr. Sherry Sours-Brothers, Manager of Research Outcomes for the AHA. The AHA program evaluation framework considers knowledge generation and expansion of research capacity to assess funding impact and to inform portfolio management decisions.

A priority for her organization has been supporting researchers at early career stages: the AHA commits 65 percent of its funding to early-career researchers – from undergraduates to assistant professors in their first faculty appointment.

“We have a lot of challenges to demonstrate through our metrics that we’re really getting the best bang for our buck from that commitment,” she said. That challenge starts with keeping track of researchers after they complete their AHA projects to see if they’re still doing research in their field.

AHA is also increasing its focus on team science and collaborative projects while bringing in researchers from other fields, Dr. Sours-Brothers said. For example, biomedical engineers working on nanotechnology on a project funded by NASA are now applying that technology to atrial fibrillation.

Developing metrics to measure societal outcomes

Kari Wojtanik, PhD, of Susan G. Komen presents on the Evaluating Research panel. Seated are moderator Andrea Michalek of Elsevier, Sherry Sours-Brothers, PhD, of the American Heart Association and Moody Altamimi, PhD, of Oak Ridge National Laboratory.

At Susan G. Komen, Dr. Wojtanik’s efforts to evaluate her organization’s research program led her team to develop a robust system to track the research they fund and measure its impact on the field of breast cancer.

They started by tagging grants by topic area. They then conducted portfolio analyses to assess which areas of research they were funding – for example, metastasis or immunotherapy – and the impact of those grants on awardees and on the field of breast cancer.
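Komen hasn’t published the internals of that tagging system, but the underlying idea – attach topic labels to each grant, then aggregate funding and grant counts across those labels – can be sketched in a few lines of Python. The grant IDs, dollar amounts and topic labels below are purely illustrative assumptions, not Komen data.

    # Minimal sketch of topic tagging and portfolio analysis.
    # All grant records here are hypothetical examples.
    from collections import defaultdict

    grants = [
        {"id": "G-001", "amount": 450_000, "topics": ["metastasis"]},
        {"id": "G-002", "amount": 300_000, "topics": ["immunotherapy"]},
        {"id": "G-003", "amount": 275_000, "topics": ["metastasis", "immunotherapy"]},
    ]

    # Aggregate funding and grant counts per topic area.
    portfolio = defaultdict(lambda: {"grants": 0, "funding": 0})
    for grant in grants:
        for topic in grant["topics"]:
            portfolio[topic]["grants"] += 1
            portfolio[topic]["funding"] += grant["amount"]

    for topic, stats in sorted(portfolio.items()):
        print(f"{topic}: {stats['grants']} grants, ${stats['funding']:,} funded")

A portfolio view like this is what lets a funder answer the question “what areas of research are we funding?” at a glance.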

Impact evaluation proved to be especially challenging. “I think a lot of organizations struggle with this question,” Dr. Wojtanik said. “Part of the issue is that the metrics we have to measure this impact are really just proxy measures; they center around what we often refer to as institutional outcomes: publications, citations, follow-on grants, did the researcher progress in their career?

“So we wanted to start thinking about ways we could develop feasible metrics that could help us measure outputs centered on societal outcomes – what is the impact to health or policy? – those research results that are really practice-changing.”

Two years ago, Komen announced a new goal for the organization: to reduce breast cancer deaths by half in the United States by 2026. For its research programs, the goal would be to find breakthroughs for incurable breast cancers in three areas: develop improved technologies for early detection, develop new treatments for aggressive subtypes of breast cancer, and develop strategies to prevent and treat metastatic breast cancer.

Her group then set out to measure their progress against these research goals.

This chart tracks Komen-funded research along the research pipeline. (Source: Susan G. Komen)

They asked a series of questions:

  • How can we measure progress towards those three focus areas?
  • Can we measure whether our research is progressing along that research pipeline?
  • Can we measure its progress from basic research to clinical testing, where it could be impacting and reducing breast cancer deaths?

To answer these questions, they developed another classification system that tags grants by their product potential and their stage in the research pipeline. “In this way,” Dr. Wojtanik said, “we can really monitor the progress of a grant and its associated products from basic research to clinical testing.”

Susan G. Komen’s internal classification and tracking system classifies grants by product potential and stage in the research pipeline.
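The details of that internal system aren’t public, but its core mechanics – tag each grant with a product potential and record the pipeline stages its associated products have reached – can be sketched along these lines. The stage names, product categories and grant IDs are illustrative assumptions, not Komen’s actual schema.

    # Minimal sketch of tracking grants along a research pipeline.
    # Stages, grants and product categories are hypothetical examples.
    from enum import IntEnum

    class PipelineStage(IntEnum):
        BASIC_RESEARCH = 1
        PRECLINICAL = 2
        CLINICAL_TESTING = 3

    grants = {
        "G-001": {"product_potential": "early-detection technology",
                  "stages": [PipelineStage.BASIC_RESEARCH]},
        "G-002": {"product_potential": "new treatment",
                  "stages": [PipelineStage.BASIC_RESEARCH, PipelineStage.PRECLINICAL]},
    }

    def current_stage(grant_id):
        """Return the most advanced pipeline stage recorded for a grant's products."""
        return max(grants[grant_id]["stages"])

    for gid in grants:
        print(gid, grants[gid]["product_potential"], current_stage(gid).name)

Re-running such a report over time shows whether funded work is moving from basic research toward clinical testing – the kind of progress monitoring Dr. Wojtanik describes.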
