Elsevier Connect

Open Access Week 2013

Redefining impact: Get credit for ALL your scholarly work

The Web – and altmetrics – present new opportunities for assessing new types of scholarly output

The Author

Dr. William Gunn (@mrgunn) is Head of Academic Outreach for Mendeley, a research management tool for collaboration and discovery.

Dr. Gunn attended Tulane University as a Louisiana Board of Regents Fellow, receiving his PhD in Biomedical Science from the Center for Gene Therapy in 2008. His research involved dissecting the molecular mechanism of bone metastasis in multiple myeloma and resulted in a novel treatment approach employing mesenchymal stem cells, the body's own reparative forces.

Frustrated with the inefficiencies of the modern research process, he left academia and established the biology program at Genalyte, a novel diagnostics startup. At Mendeley, he works to make research more impactful and reproducible and is an expert on altmetrics, reproducibility and open access.

Let's say you're a researcher looking for the latest developments in intracellular membrane trafficking, an area you believe may have applications to your own field of study. How do you find out what the latest discoveries are?

You could try reading papers from well-known authors (if you know who they are), or you could find a recent review (if there is one), or you could see if anything has been published recently in a top journal like Cell, Nature or Science.

These are all good approaches, but they're far from systematic, and they leave you wondering what you've missed, so you try a search on PubMed. You get thousands of results, and it's not immediately clear which ones you can even access, let alone which ones best reflect the scientific consensus.

Next, you try Scopus, sorting your results by the number of citations. This works a little better, but papers published this year often haven't yet accumulated many citations. Worse, when you take a closer look, many of the highly cited papers just aren't very good, and you suspect some got through peer review on the strength of the author's name or institution rather than technical merit.

Since Eugene Garfield developed the idea of the Journal Impact Factor in the 1960s, it has become the de facto standard for assessing research impact. But the Web presents new opportunities for looking at much richer article-level data, as well as for assessing new types of scholarly output such as datasets and software.

If this describes a situation you've often found yourself in, you're in good company. It's not just researchers: funders struggle to identify the quality papers that suggest an early-career researcher deserves a shot at some exploratory funding, and publishers struggle to spot the papers most likely to appeal to their audience. This is the problem addressed by a new way of looking at research metrics, called altmetrics.

Altmetrics: you're more than just a number

Altmetrics apply to scholarship the same kinds of signals the rest of the Web uses to determine which pages are most likely to be relevant to a site's audience or to an individual. If Amazon.com can tell you which phone you'd most likely want to buy, and Google Analytics can tell you which stories your audience is most interested in, why shouldn't there be a service that can tell you which papers you'd be most interested in reading or publishing?

The good news is that there can be, but we have some work to do to get it right and to expand our definition of impact beyond a paper that is highly cited by other research papers.
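To make that idea concrete, here is a toy sketch of the kind of signal weighting such a service might perform. Everything in it is an assumption for illustration: the signal names, the weights and the DOIs are all invented, and a real recommender would derive its weights from data rather than hard-coding them.

```python
# Toy illustration only: combine hypothetical per-paper signal counts into a
# single relevance score. Signal names, weights and DOIs are invented here;
# a real service would learn its weights rather than hard-code them.

SIGNAL_WEIGHTS = {
    "mendeley_readers": 1.0,   # scholarly bookmarking
    "citations": 2.0,          # traditional scholarly impact
    "tweets": 0.25,            # broad public attention
    "blog_mentions": 0.5,      # longer-form discussion
}

def relevance_score(signals):
    """Sum weighted counts; unknown signals contribute nothing."""
    return sum(SIGNAL_WEIGHTS.get(name, 0.0) * count
               for name, count in signals.items())

papers = {
    "10.1000/example.001": {"mendeley_readers": 120, "tweets": 40, "citations": 3},
    "10.1000/example.002": {"mendeley_readers": 15, "citations": 30},
}

# Rank papers the way a recommendation service might, most relevant first.
for doi in sorted(papers, key=lambda d: relevance_score(papers[d]), reverse=True):
    print(doi, relevance_score(papers[doi]))
```

Even this crude version surfaces a paper with heavy readership and public attention ahead of one that is merely well cited, which is the shift in perspective altmetrics is after.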

Impact may not have 31 flavors, but new data shows it has more than just one. (Photo by Ian Mackay via Flickr)

First, data on which papers have been cited by which needs to be available under licensing terms (such as CC-BY or CC0) that enable its use and reuse. The many groups working on this could each solve only part of the problem for their own niche, but together, as part of an innovation ecosystem, they could contribute to a much more powerful and more general solution.

Second, we need to study the various signals indicating attention and impact on the Web, including those from academic collaboration tools like Mendeley, repositories such as Dryad and Figshare, and other sources such as blog posts, Twitter and GitHub. We still don't understand why people write blog posts or tweet about academic papers, but over time, in collaboration with the bibliometrics and webometrics communities, we'll learn what these signals mean and how they're best used to filter or assess the research mentioned in them.

Finally, we need to capture all these metrics in a standardized format that makes them maximally useful to anyone who wishes to use them; a rough sketch of what such a record might look like follows below.
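As a thought experiment, here is a minimal sketch of normalizing those heterogeneous signals into one standardized record. All of it is hypothetical: the field names, the per-source adapters and the DOI are assumptions for illustration, not an established standard or any particular service's API.

```python
# A sketch of normalizing heterogeneous attention signals into a single
# standardized record. All field names, adapters and the DOI below are
# hypothetical; real sources (Mendeley, Twitter, blogs) each expose their
# own fields and terms of use.

import json
from dataclasses import dataclass, asdict

@dataclass
class MetricSnapshot:
    doi: str            # identifier of the work being measured
    source: str         # where the signal came from, e.g. "mendeley"
    metric: str         # what was counted, e.g. "readers", "tweets"
    value: int          # the count at collection time
    collected_at: str   # ISO 8601 timestamp of collection
    license: str        # terms under which the data may be reused

# Each source needs its own adapter from raw fields to the common shape.
ADAPTERS = {
    "mendeley": lambda raw: ("readers", raw["reader_count"]),
    "twitter":  lambda raw: ("tweets", raw["tweet_count"]),
    "blogs":    lambda raw: ("posts", raw["post_count"]),
}

def normalize(source, raw, collected_at):
    metric, value = ADAPTERS[source](raw)
    return MetricSnapshot(raw["doi"], source, metric, value,
                          collected_at, license="CC0")

snap = normalize("mendeley",
                 {"doi": "10.1000/example.001", "reader_count": 120},
                 collected_at="2013-10-21T00:00:00Z")
print(json.dumps(asdict(snap), indent=2))
```

Carrying an explicit license field on every record matters as much as the counts themselves: it's what would let the many groups mentioned above reuse and recombine each other's data.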

All of the above represents very good progress toward a new definition of impact, one that better reflects the way scholarly communication happens today and that enables the Web to transform and accelerate research, just as it has touched so many other parts of our lives. We're not there yet; there are still challenges to ensuring altmetrics are used with care and competence. And the Journal Impact Factor does have some value as an indication of journal influence (as discussed in Mendeley's and Elsevier's respective views on DORA), but I'm personally very excited about what lies ahead.

I hope you join me in celebrating Open Access Week and its potential for providing expanded free, immediate online access to the results of scholarly research, and the right to use and re-use those results as you need.


