This is the third attempt in a week to record the thinking that initiated the first piece and pervaded the second. So here goes: “Science is based upon measurement and evaluation, yet the activities of scientists have themselves been less measured and evaluated than the subjects of their research.” In a society that now seeks the ROI of everything, this is becoming an important consideration. In the past we were happy to measure secondary effects, like the shadow on the wall called “impact factor”, when it came to measuring and evaluating “good science”. Now that we can see clearly who is reading what and how they rate it (look only at Mendeley and ReadCube), and what they say about it on Twitter and the blogs, we have a much broader measure of what is influential and effective. The market leader in measurement a decade ago was Thomson (ISI), building on the outstanding heritage of Eugene Garfield. It was followed by Elsevier, which today, by way of Scopus and its offshoots, probably matches it in some ways and exceeds it in others. Today these players find themselves in a very competitive market space, and one in which pressure is mounting. Science will be deluged by data unless someone can signpost high quality quickly and use it in filters to protect users from the unnecessary, while keeping everything available for those who need to search the totality.

I started to get interested in this last year, when the word “alt-metrics” first showed up. A PLoS blog by Jan Leloup in November 2011 asked for data:

“We seek high quality submissions that advance the understanding of the efficacy of altmetrics, addressing research areas including:

So a wide range of new measuring points is required, alongside new techniques for evaluating the measurement data gathered from a very wide variety of sources. And what is “altmetrics”? Simply the growing business of using social media data collection as a new evaluation point, in order to triangulate measurements that point to the relative importance of various scientific outputs. Here the founders make the point at www.altmetrics.org:

“altmetrics is the creation and study of new metrics based on the Social Web for analyzing, and informing scholarship.

Our vision is summarized in:

J. Priem, D. Taraborelli, P. Groth, C. Neylon (2010), Altmetrics: A manifesto, (v.1.0), 26 October 2010. http://altmetrics.org/manifesto
These scholars plainly see as well that it is not just the article that needs to be measured and evaluated, but the whole chain of scholarly communication, and they indicate particular pressure points where the traditional article needs to be supported by other publishing types in the research communication cycle:

“Altmetrics expand our view of what impact looks like, but also of what’s making the impact. This matters because expressions of scholarship are becoming more diverse. Articles are increasingly joined by:

  • The sharing of “raw science” like datasets, code, and experimental designs
  • Semantic publishing or “nanopublication,” where the citeable unit is an argument or passage rather than entire article.
  • Widespread self-publishing via blogging, microblogging, and comments or annotations on existing work.

Because altmetrics are themselves diverse, they’re great for measuring impact in this diverse scholarly ecosystem. In fact, altmetrics will be essential to sift these new forms, since they’re outside the scope of traditional filters. This diversity can also help in measuring the aggregate impact of the research enterprise itself.”

So a new science of measurement and evaluation is being born, and, as it emerges, others begin to see ways of commercialising it. And rightly so, since without some competition here progress will be slow. The leader at present is a young London start-up called, wisely, Altmetric. It has created an algorithm, encased it in a brightly coloured “doughnut” with at-a-glance scoring, and its first implementation is on PLoS articles. I almost hesitate to write that it is a recent investment of Macmillan Global Science and Education’s Digital Science subsidiary, since they seem to crop up so often in these pages. But it is also certainly true that if this observant management has noted the trend then others have as well. Watch out for a crop of start-ups here, and the rapid evolution of new algorithms.
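To make the mechanics concrete, here is a minimal sketch, in Python, of the kind of source-weighted attention scoring and filtering such services perform. The source weights, threshold and sample identifiers are all invented for illustration; Altmetric’s actual algorithm is proprietary and certainly more sophisticated than this.

```python
# A toy, source-weighted attention score and filter.
# Weights, threshold and article identifiers are hypothetical: NOT Altmetric's real method.

SOURCE_WEIGHTS = {
    "news_outlet": 8.0,          # assume mainstream news counts most
    "blog": 5.0,
    "twitter": 1.0,
    "reference_manager": 0.5,    # e.g. Mendeley/ReadCube readership
}

def attention_score(mentions):
    """mentions: list of (source_type, count) pairs for one article."""
    return sum(SOURCE_WEIGHTS.get(source, 0.0) * count for source, count in mentions)

def signpost(articles, threshold=20.0):
    """Keep only articles whose score clears the threshold, highest first:
    a crude filter to 'protect users from the unnecessary'."""
    scored = [(attention_score(m), doi) for doi, m in articles.items()]
    return sorted((s, d) for s, d in scored if s >= threshold)[::-1]

if __name__ == "__main__":
    sample = {   # made-up articles and mention counts
        "article-A": [("twitter", 40), ("blog", 3), ("news_outlet", 1)],
        "article-B": [("twitter", 2), ("reference_manager", 10)],
    }
    for score, doi in signpost(sample):
        print(f"{doi}: {score:.1f}")   # only article-A clears the bar
```

The design choice that matters, of course, is the weighting: how much a tweet should count relative to a news story or a blog post is exactly the kind of judgement these new algorithms encode.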

Which really brings me back to the conclusion already written in my previous pieces but not fully drawn out. Measurement and Evaluation – M&E – is the content layer above metadata in our content stack. It has the potential to stop us from drowning in our own productivity. It will have high value in every content vertical, not just in science. Readers will increasingly expect signposts and scores so that they do not waste their time. And, more importantly than anything else, those who buy data in any form will need to predict the return they are likely to get in order to buy with security. They will not get their buying decisions right, of course, but the altmetrics will enable them to defend themselves to budget officers, to taxpayers, and to you and me when we complain that so much funding is wasted!

 

It was quite a week. In the corridors of power, media tycoons planned post-imperial escape routes. And we who were content to play in a corner with, in the Yeatsian line, a looking glass and some beads, found wonders revealed in the very simplest of things. So Rupert Murdoch did a McGraw-Hill and divided his imperium into Good bank/Bad bank, and the latter got all the stricken print, from the Times to HarperCollins. The image which stuck with me, with the hacking debacle somewhat in mind, was the US exit from Saigon. I tweeted that I could hear the helicopter’s whirling rotors above the embassy roof. The tycoon’s change of heart displayed just that sense of panic – “…OK, let’s burn the papers and go…!” – leaving in the air the question of who can be persuaded to invest in the Bad bank, and at what price?

Back down at street level, two very encouraging developments took place in educational activities that I have been tracking for a very long time. While Mr Murdoch bundled Joel Klein’s educational division into the Bad bank category, I think we are pulling back round towards a very clear and obvious progression framework for new service development. At the beginning of the month, in “After the Textbook is over”, I tried to track the way in which narrative, especially in video-based format, will change our approach – or, rather, allow it to revert to the ways in which learners have always learnt. And I looked at buy-and-build strategies aimed at creating real weight in the serious educational gaming markets. So now, at the end of the month, let me add two more elements to the mix. The platforms on which the new learning will be presented will be mobile, tablet and post-tablet, and they will need to support teachers as narrative creators, learning journey planners and learning games implementers. The resources and assessments are even now being developed. And it will be critical to the success of all of this that teachers at all levels support each other, that successful learning journeys can be adopted, amended and replicated, and that the behavioural tracking which we can do so much more effectively in these digital contexts is re-applied all the time to help these environments grow responsively. (It remains an interesting question: why do we aspire so strongly to apply behavioural feedback to target advertising, yet use it so relatively sparingly to improve interfaces, online interactions and, above all, the learning experience?)
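As a rough illustration of that behavioural feedback loop (a sketch under assumptions, not any vendor’s actual implementation), the fragment below aggregates hypothetical learner-interaction events to flag the steps of a learning journey where completion rates collapse, which is the kind of signal a teacher could use to revise the journey. The event fields and the drop-off threshold are invented.

```python
# Minimal sketch: re-applying behavioural tracking to a learning journey.
# Event structure and threshold are hypothetical.

from collections import defaultdict

def stall_points(events, dropoff_threshold=0.5):
    """events: dicts like {"learner": "a1", "step": 2, "completed": True}.
    Returns the steps where the completion rate falls below the threshold."""
    attempts = defaultdict(int)
    completions = defaultdict(int)
    for e in events:
        attempts[e["step"]] += 1
        if e["completed"]:
            completions[e["step"]] += 1
    return [step for step in sorted(attempts)
            if completions[step] / attempts[step] < dropoff_threshold]

if __name__ == "__main__":
    log = [
        {"learner": "a1", "step": 1, "completed": True},
        {"learner": "a2", "step": 1, "completed": True},
        {"learner": "a1", "step": 2, "completed": False},
        {"learner": "a2", "step": 2, "completed": False},
    ]
    print(stall_points(log))   # -> [2]: step 2 is where learners stall
```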

Three news stories this week illustrate these matters for me. In the first instance I was very taken by the news (and it has been a long wait) that Global Grid for Learning (www.ggflondemand.com) is now an accredited part of the Microsoft Education Suite. Global Grid for Learning was developed by Cambridge University Press as the neutral storehouse and trusted broker for copyright-cleared information, allowing a teacher-facing aggregation of learning objects with good metadata connectivity to act as a quarry for lesson planning and narrative assembly. The service is now owned by EduTone in California, though why on earth Cambridge sold it last year when it was so close to success still beats me. The fact that it is now the supply point in the Microsoft Education Suite in the very week when Microsoft announced its entry into the tablet market via its Surface strategy speaks volumes for the importance of this type of work (and also perhaps says something sad about the inability of some ancient university presses to change gear – Cambridge has now effectively left its domestic education market and removed its bridge to global markets).

But not all teachers will plan lessons, make journeys or write narratives. Many or even most will borrow, imitate or adapt. This means that good practice has to be available and exposed, and that teachers have to respond to it. So it was hugely encouraging this week to read the announcement from the American Federation of Teachers that their Share My Lesson site will become available in August. This is the result of their collaboration with the UK’s TSL Education (the owner of the Times Educational Supplement, another Rupert Murdoch company dumped on the road to the Bad bank, but doing very well in private equity hands). TSL Education created TES Connect in 2008 as a way of creating a sharing environment for British teachers. The service currently has some 2 million members in 197 countries, and they download about 2.5 million lessons per month (http://www.tsleducation.com/). So gradually a global architecture moves into place to fuel the resource provision requirements of an education world which now has the other infrastructure environments it needs (networks, VLEs and LMS technology for storing, serving, collating results and communicating with parents, employers etc.). It is often said, and I have sometimes said it, that the system is now dominated by assessment: in truth, we are moving not just towards continuous assessment, but to the point where every learner knows when they have learnt something, and so do the systems around them. In order to make that vision sustainable we have to up the quality of the game in terms of the learning journey itself, and no one is doing more for that than TSL Education. Among the 34 countries measured in the Program for International Student Assessment, the US currently ranks 14th in reading, 17th in science and 25th in mathematics (Associated Press, 19 June 2012). Whatever else this means, it also means that teething pains in re-engineering the teaching workforce should not be a deterrent when there is so much opportunity for improvement.

And the last story? It has nothing formally to do with education, but it certainly demonstrates the feedback loop I was talking about earlier. Thomson Reuters launched its MarketPsych Indices (TRMIs) last week “in order to give real time psychological analysis of news and social media”. Here we are trying to “model the impact of investor psychology” and eventually develop “under the radar investment hypotheses”. In the whole field of learning more about what we know through analysis of how we talk about it, we are still in the nursery. Apply this in education and our learning journeys may emerge in an interesting new light!
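I have no insight into how the TRMIs are actually built, but the general shape of such an index (counting sentiment-bearing words in a stream of news or social media text and rolling the balance up into a single number) can be sketched briefly. The word lists and scoring below are invented for illustration and bear no relation to MarketPsych’s methodology.

```python
# A crude sentiment-index sketch: NOT Thomson Reuters' MarketPsych methodology,
# just an illustration of rolling word-level sentiment up into one number.

POSITIVE = {"optimistic", "growth", "confident", "rally"}
NEGATIVE = {"fear", "panic", "decline", "sell-off"}

def sentiment_index(documents):
    """Return a score in [-1, 1]: +1 if all mentions are positive, -1 if all negative."""
    pos = neg = 0
    for text in documents:
        for word in text.lower().split():
            word = word.strip(".,!?")
            if word in POSITIVE:
                pos += 1
            elif word in NEGATIVE:
                neg += 1
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

if __name__ == "__main__":
    stream = [   # made-up headlines
        "Investors remain confident despite fear of a wider decline.",
        "Panic selling gives way to a late rally.",
    ]
    print(round(sentiment_index(stream), 2))   # slightly negative on this sample
```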
