[Image: Sunny with scattered papers]

Measuring Success

Seed magazine has an interesting article, On Science Transfer, about the measurement of scientific success by means of automated metrics - a topic we have discussed several times on this blog, see e.g. my posts Science Metrics and Against Measure.

The article is interesting in that it focuses on measuring scientific activities that are not usually considered for academic purposes: communicating science and being relevant for science policy - that is what is meant by "science transfer." For that purpose, the commonly used citation-based measures are of limited use:

“If we want to know what scientific ideas are influencing decisions and policymaking in the public sphere or in disparate scientific fields, rather than simply the discipline in which an idea originated, citations are of less relevance [...] Writing in the popular press is equally unlikely to garner citations. Even trying to translate research into something more digestible by a lay audience within the academic publishing world is a dead end; editorial and other journalistic material is generally deemed “uncitable.”

Though it is by no means the only aspect of scientific culture responsible, the fixation on citations as a measure of scholarly impact has given scientists few reasons to communicate the value of their work to non-scientists.”
The article then discusses the possibility of more general measures of impact based on usage, such as MESUR. I am skeptical that usage is an indicator of quality rather than popularity. Some works arguably score a lot of hits and downloads exactly because they turn out to be utter nonsense.

But either way, I certainly welcome the attempt to take note of a scientist's impact on informing the public. A few days ago, Vivienne Raper had an interesting blog post on Science Blogging and Tenure, summarizing the pros and cons of blogging alongside doing research. She reports an example from innovation-country Canada:
“Cell biologist Alexander Palazzo says his blog helped him secure an assistant professorship. "My department" -- the biochemistry department at the University of Toronto in Canada -- "told me part of the reason they hired me was because of stuff I'd written on my blog," he says. "It wasn't the main reason they hired me, but it helped."”

Another item on the topic of bringing science closer to the public and the role of blogging: In the last three months or so I have received about five emails from freelance writers with a record of science-themed articles, asking for a guest post. As you can see, I said thanks but no thanks, but I find this an interesting development. It seems there are people for whom blogs represent a useful medium to earn career credits.

But back to the Seed article: it is interesting for another reason. As we previously discussed, purely software-generated measures can be unreliable, as shown by the example of a whole university's high ranking that traced back to the publication count of a single researcher (who published several hundred papers in a journal of which he also happened to be editor-in-chief), and the example of how the h-index of a (not even existent) author can be pimped to that of an exceptional scientist. The Seed article takes note of this problem by acknowledging the need for human interpretation of the data - a task for the "science meteorologist":
“Even if we erect massive databases filled with information on how scientific work is being used in real time, for the foreseeable future it seems inescapable that humans must provide oversight to derive actionable knowledge from the data. Modern weather forecasting provides an illustrative example: Copious real-time data on world weather patterns is available to anyone with a computer and an internet connection, but the vast majority of us rely on meteorologists to synthesize and analyze it to produce a daily forecast. Moreover, even more raw data and subsequent analysis are necessary to transform information about weather into knowledge about climate and how human activity has influenced it over the course of centuries.

Well-designed computer programs may be able to compile usage data on scientific discourse and publishing to generate real-time maps of scientific activity, but such maps can only inform our decision making, not replace it. A new skill set that makes use of such tools -- a kind of “science meteorology” -- will be necessary to serve as a bridge between the academic and public spheres.”
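
(An aside on how little it takes to game a citation metric like the h-index mentioned above: the h-index is simply the largest h such that an author has h papers with at least h citations each. The sketch below is purely illustrative - the function and the numbers are mine, not taken from the manipulation experiment - but the second call shows how a batch of papers that do nothing except cite each other produces a score that looks exceptional.)

    # Illustrative sketch only: the h-index is the largest h such that
    # h of the author's papers have at least h citations each.
    def h_index(citations):
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    print(h_index([25, 18, 12, 7, 3, 1]))  # 4 -- a modest, genuine-looking record
    print(h_index([60] * 60))              # 60 -- built entirely from mutual citations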

Granted, they are concerned with measuring the impact of scientific work on policy decisions, but I couldn't help wondering what a science meteorologist would "forecast" from data of individual scientists. This candidate is sunny with scattered papers? Clear and cold with a student chill factor of zero K? Partly cloudy with a 10% chance of tenure?

The Seed article also touches on an issue I previously commented on here:
“The problem with evaluating all [scientists] with one fast and easy evaluation system is centralization and streamlining. The more people use the same system, the more likely it becomes everybody will do the same research with the same methods.”

Michael Nielsen also recently wrote an excellent post on The Mismeasurement of Science making this point:
“I accept that metrics in some form are inevitable -- after all [...] every granting or hiring committee is effectively using a metric every time they make a decision. My argument instead is essentially an argument against homogeneity in the evaluation of science: it's not the use of metrics I'm objecting to, per se, rather it's the idea that a relatively small number of metrics may become broadly influential. I shall argue that it's much better if the system is very diverse, with all sorts of different ways being used to evaluate science.”

(Michael is btw writing a book titled “Reinventing Discovery,” to be published this year. Something for your reading list.) The Seed article also quotes Johan Bollen, associate professor at Indiana University's School of Informatics and Computing and the brain behind the MESUR project:
“If you have a bunch of different metrics, and they each embody different aspects of scholarly impact, I think that's a much healthier system.”

We can agree on that. Then Bollen continues:
“People's true value can be gleaned [...]”

Let's hope the day a scientist's "true value" is defined by software will never come.

Summary:
  • Efforts are being made to measure scientists' skills in communicating research to the public and to policy makers. Useful for evaluating success, as defined by the measure, and for providing incentives. -- Good.

  • Measuring success by usage. -- Questionable.

  • Noting that data collection still needs human assessment. -- Good.

  • Diversifying measures prevents streamlining and is thus welcome; in other words, if you have to use metrics, at least use them smartly. -- Indeed.

  • People's true value can be gleaned... -- Pooh.

  • Michael's book is almost done. -- Yeah!

 