Rigorous scholarly work requires periodic assessment of our underlying 
assumptions. If these are found to be incorrect, then any logical arguments or 
empirical work based on these assumptions should be questioned.


Assumptions underlying metrics-based evaluation include:

  1.  impact is a quality of good scholarship at the level of individual works
  2.  aiming for impact is desirable in scholarly work

Let's consider the logic and an example.


  1.  Is impact a good thing? Consider what "impact" means in other contexts. 
Hurricanes and other natural disasters have impact; when we seek to work in 
harmony with the environment, we try to avoid impact. "Impact" is not 
inherently tied to the quality of "good".
  2.  Is aiming for impact at the level of individual scholarly works 
desirable? According to Retraction Watch, one of the top 10 most highly cited 
retracted papers is "the infamous Lancet paper by Andrew Wakefield that 
originally suggested a link between autism and childhood vaccines" 
(http://retractionwatch.com/2011/01/06/some-quick-thoughts-and-links-on-andrew-wakefield-the-bmj-autism-vaccines-and-fraud/; 
from: 
https://retractionwatch.com/the-retraction-watch-leaderboard/top-10-most-highly-cited-retracted-papers/). 
This article has been highly cited in academic papers both before and after 
retraction and widely quoted in traditional and social media, and, I argue, it 
demonstrates truly exceptional real-world impact in the form of the return of 
childhood diseases that were on track for worldwide eradication. Any way you 
measure impact, this article had it. Could this be a fluke? I argue that there 
are logical reasons why this would not be a fluke. When researchers are 
rewarded for impact, there is an incentive to overstate conclusions, to see 
positive and interesting results beyond what the data show, and even to commit 
outright fraud.

It is important to distinguish between the consequences of impact at the level 
of an individual research work and scholarly consensus based on a substantial 
body of evidence (such as climate change).

It is also important to consider some of the implications of metrics-based 
evaluation for individual scholars. Social biases such as those based on gender, 
ethnic origin, and Western centrism are common in our society, including in 
academia. There is some recognition of this in traditional academic work, and 
some effort to counter bias (such as blind review); however, such bias cannot be 
controlled in the downstream academic environment, and it seems obvious that 
metrics that go beyond academic citations will tend to amplify these biases.

Evaluation of the quality of scholarly work does not require metrics. Anyone 
who is a researcher needs to do a great deal of reading and assessment of 
scholarly works. Professors read and grade papers and theses. When I evaluate 
dossiers for scholarships, grants, or tenure and promotion committees, I read 
and evaluate the works.

The University of Ottawa has what I consider a good, non-metrics-based approach 
to evaluating research. Although it was written some time ago, it is still 
leading-edge. To obtain promotion and tenure, for example, a professor needs to 
demonstrate that they are contributing a sufficient amount of original research 
beyond their dissertation. It is recognized that there are many different kinds 
of knowledge generation. A scientist may publish journal articles; a professor 
in theatre may accomplish innovations in production of plays. There is no need 
to add preprints; this is already covered. If you know of other good non-metric 
models for evaluation, please share with the list.

This e-mail is a brief piece on a topic that I've written about in quite a bit 
more detail. Anyone who has the time is invited to read a book chapter I wrote 
that is currently in the process of publication: "What counts in research? 
Dysfunction in knowledge creation & moving beyond". In addition to a critical 
view of 
metrics-based evaluation (traditional and altmetrics), readers may be 
interested in learning about how metrics feed into university rankings and the 
growing role of Elsevier in this space. When the book is published, I'll refer 
to the work of fellow authors for an explanation of the problems associated 
with university rankings per se.

http://hdl.handle.net/10393/39088

best,

Dr. Heather Morrison

Associate Professor, School of Information Studies, University of Ottawa

Professeur Agrégé, École des Sciences de l'Information, Université d'Ottawa

Principal Investigator, Sustaining the Knowledge Commons, a SSHRC Insight 
Project

sustainingknowledgecommons.org

heather.morri...@uottawa.ca

https://uniweb.uottawa.ca/?lang=en#/members/706

[On research sabbatical July 1, 2019 - June 30, 2020]
