This is the third attempt in a week to record the thinking that initiated the first one and pervaded the second. So here goes: “Science is based upon measurement and evaluation, yet the activities of scientists have themselves been less measured and evaluated than the subjects of their research.” In a society that now seeks the ROI of everything, this is becoming an important consideration. In the past we were happy to measure secondary effects, like the shadow on the wall called “impact factor”, when it came to measuring and evaluating “good science”. Now that we can see clearly who is reading what and how they rate it (look at Mendeley and ReadCube alone), and what they say about it on Twitter and the blogs, we have a much broader measure of what is influential and effective. The market leader in measurement a decade ago was Thomson (ISI), building on the outstanding heritage of Eugene Garfield. It was followed by Elsevier, who today, by way of Scopus and its offshoots, probably match Thomson in some ways and exceed it in others. Today these players find themselves in a very competitive market space, and one in which pressure is mounting. Science will be deluged by data unless someone can signpost high quality quickly and use it in filters that protect users from the unnecessary, while keeping everything available for those who need to search the totality.
I started to get interested in this last year, when the word “alt-metrics” first showed up. A PLoS blog by Jan Leloup in November 2011 asked for data:
“We seek high quality submissions that advance the understanding of the efficacy of altmetrics, addressing research areas including:
- Validated new metrics based on social media.
- Tracking science communication on the Web.
- Relation between traditional metrics and altmetrics including validation and correlation.
- The relationship between peer review and altmetrics.
- Evaluated tools for gathering, analyzing, or disseminating altmetrics”
So a wide range of new measuring points is required, alongside new techniques for evaluating the measurement data gathered from a very wide variety of sources. And what is “altmetrics”? Simply the growing business of using social media data collection as a new evaluation point, in order to triangulate measurements that point to the relative importance of various scientific outputs. Here the founders make the point at www.altmetrics.org:
“altmetrics is the creation and study of new metrics based on the Social Web for analyzing, and informing scholarship.
Our vision is summarized in:
“Altmetrics expand our view of what impact looks like, but also of what’s making the impact. This matters because expressions of scholarship are becoming more diverse. Articles are increasingly joined by:
- The sharing of “raw science” like datasets, code, and experimental designs
- Semantic publishing or “nanopublication,” where the citeable unit is an argument or passage rather than entire article.
- Widespread self-publishing via blogging, microblogging, and comments or annotations on existing work.
Because altmetrics are themselves diverse, they’re great for measuring impact in this diverse scholarly ecosystem. In fact, altmetrics will be essential to sift these new forms, since they’re outside the scope of traditional filters. This diversity can also help in measuring the aggregate impact of the research enterprise itself.”
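The triangulation idea described earlier — combining signals from several social sources into one indicator of relative importance — can be sketched in a few lines. Everything here is hypothetical: the source names, the weights, and the linear combination are invented for illustration, and are not the algorithm of Altmetric or any real service.

```python
# Toy altmetrics aggregator. Sources and weights are invented for
# illustration only; real providers use their own (proprietary) methods.
from typing import Dict

# Hypothetical weights: a blog post or news story arguably signals deeper
# engagement than a tweet, so those sources are weighted more heavily.
WEIGHTS: Dict[str, float] = {
    "tweets": 0.25,
    "blog_posts": 5.0,
    "mendeley_readers": 1.0,
    "news_stories": 8.0,
}

def toy_altmetric_score(counts: Dict[str, int]) -> float:
    """Combine per-source mention counts into one triangulated score."""
    return sum(WEIGHTS.get(source, 0.0) * n for source, n in counts.items())

article = {"tweets": 40, "blog_posts": 2, "mendeley_readers": 15, "news_stories": 1}
print(toy_altmetric_score(article))  # 40*0.25 + 2*5.0 + 15*1.0 + 1*8.0 = 43.0
```

Even a toy like this makes the filtering argument concrete: once each output carries a score, a reader (or a recommender) can rank and threshold rather than wade through everything.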
So a new science of measurement and evaluation is being born, and, as it emerges, others begin to see ways of commercialising it. And rightly so, since without some competition here progress will be slow. The leader at present is a young London start-up called, wisely, Altmetric. It has created an algorithm, encased it in a brightly coloured “doughnut” with at-a-glance scoring, and its first implementation is on PLoS articles. I almost hesitate to write that it is a recent investment of Macmillan Global Science and Education’s Digital Science subsidiary, since they seem to crop up so often in these pages. But it is certainly true that if this observant management has noted the trend, then others have as well. Watch out for a crop of start-ups here, and the rapid evolution of new algorithms.
Which really brings me back to the conclusion already written in my previous pieces but not fully drawn out. Measurement and Evaluation – M&E – is the content layer above metadata in our content stack. It has the potential to stop us from drowning in our own productivity. It will have high value in every content vertical, not just in science. Readers will increasingly expect signposts and scores so that they do not waste their time. And most importantly of all, those who buy data in any form will need to predict the return they are likely to get in order to buy with confidence. They will not always get their buying decisions right, of course, but altmetrics will enable them to defend themselves to budget officers, taxpayers, and you and me when we complain that so much funding is wasted!