In many parts of the Scholarly Communications world we have moved long past the milieu that Gene Garfield ushered in thirty years ago with the establishment of ISI and the impact factor. But conservative marketplaces have a long tail: much of the academic world is still firmly tied to the evaluative gold standard of the impact factor, and no one will be persuaded to do anything differently until they are also persuaded that it works. And it will take a long time to persuade people that a new governing metric is available – a platinum standard for distinguishing the highest quality of research over time – because a great deal of data has to be assembled and analysed before that can be accomplished. Yet leading voices in the sector, from funders and governments to researchers and librarians, continue to seek evaluation that includes all the available metrics, that allows a place for network usage appraisal as well as for the impact of blogs and tweets and posters and demonstrations, and which tries to draw a more holistic picture of the impact of a researcher or a research team.


And while we need tools like this for the benefit of funders and governments in making funding decisions, we are equally desperate to help researchers manage and discriminate amongst the tsunami of research publishing at the present time, and the wall of knowledge building up behind it, as access to second-tier research from a host of countries outside Europe, the US and Japan supplements the very considerable research contribution of China, India, Brazil and other countries in recent years. And since we long ago lost sight of a specialist being able to read everything published in their sector in a year, we vitally need the data, and especially the metadata, which will enable us to summarise what we cannot read, apply standards where we cannot form judgements, help us to derisk the issue of missing something significant, and ensure that our literature reviews and our management of prior knowledge do not sink the research ship before we set off on the real voyage of discovery which takes place after the discovery of the known.


And we have been preparing ourselves for this for years, as is clear in commercial market moves over the last few years. When Clarivate Analytics, owners of Web of Science and thus of the citation-based world we have inherited, bought Publons, I always imagined it was because using the peer review data in this context will be very important. Equally, Elsevier began the whole long development track behind SciVal in order to use vital data from Scopus and elsewhere to begin this immense evaluative task. And Informa bought Colwiz in order to get into the same boat. As of today, the boat is getting just a bit more crowded. The launch this morning of Dimensions, based on the funding data work of the existing Dimensions site at Digital Science, but now integrating data from six Digital Science companies, is a dramatic step and raises the bar for all of its competitors. It teaches us something important about bringing together data from a variety of sources in a way that gives users a new view with enhanced utility, and it shows how a publisher service like ReadCube can also be turned round to become a discovery engine in its own right. And with a marketing plan that starts from free access for researchers and moves through institutional fees at different levels of value, it really accommodates itself to the prevailing culture around the costs of service provision.


This is a great start from which to build – and there is a long build yet to go. There is both geographical data and altmetrics data still to come. Then there are the evaluative tools that will support research management in its urgent evaluation needs. Impressive as this aggregation is, there is more in front than there is behind. But Dimensions certainly puts Digital Science in position. It is the first service development that makes Digital Science a company and not an incubator, and this may be very important to its next steps. Like Clarivate, Digital Science is not a publisher. It therefore has Clarivate's vital neutrality from the things that it is rating. Above all, it is a data and analytics company and was created to work in this research management space – a space into which many publishers will struggle as journal publishing, the cash cow of the 1990s, becomes a more and more difficult and less predictable marketplace.


Yet all of these players face one sobering difficulty. Building the foremost evaluative research engine is not like other competitive parts of this marketplace. A new standard is just that. Four competing standards help no one. Winner takes all, and after a time the prevailing system becomes supported by other people's tools, and competition is reduced. This is not to say that there is not good business in providing researchers with good analytical tools, just that it is most likely that the successor to the impact factor will remain a single entity managed for the market by one of these players – or by one we do not yet see coming, perhaps from the non-profit sector. In the meanwhile, "new" Dimensions should be celebrated, and not least by Holtzbrinck. If they do need to suck debt from Springer Nature post-IPO, then Digital Science is becoming a very valuable token for doing that and ensuring their majority position. If that does not happen, then they have an asset with fast-growing value in the group.

As some readers in this place already know, the boring fact is that I started work in the publishing and information industry in October 1967, and am thus over fifty years an observer of change in these parts. And, in what some regard as a fifty-year dotage, I am prone to remark that change is the new normal, etc etc, and to pour scorn on the wealthy publisher whom I approached for work in 1993 and who replied "tell me when your digital revolution thing is over and then help me to cope with the next five hundred years of the post-printing world". And I quite see the point. Revolutions are not for everyone. And there were comfortable years in my twenties when it seemed possible to believe that Longman and OUP, Nelson and Macmillan, could go on ruling the post-colonial world of school textbook publishing with nothing more exciting than a revised Latin syllabus to stir the waters of their creativity. Yet in truth the world of print, from the rise of Gutenberg to the fall of the house of Murdoch, has been full of change. It just happens faster and more completely now.

In the old world (my personal calendar divides at 1993, Year 1, when I did my first internet strategy consultancy job: appearances to the contrary, my age is only 25!) we bought and sold companies on valuations that reflected something of their ownership of unique, proprietary content. In the deals that I did for Thomson in the early 1980s, and particularly in the building of the then large law database Eurolex, ownership and exclusivity were critical. Journeying to Luxembourg last week, I reflected on my first visit there in 1980 to negotiate the rights to put the judgements of the European Court of Justice online. Reporting triumphantly to my chairman, I recall him saying – "but surely they are worthless if everyone can get them?" Since that day the following earthquakes have taken place: cross-file searching, which gave real utility to collections of documents held online; the Internet and the Web, which permitted exposed content to be treated and searched as if it were all in the same place; and then the ability to scrape, copy, transmit and, in the age of SciHub, mass-pirate that allegedly precious content, proprietary or not.


And so we emerge into the Age of Data. It took a decade for the content world to understand that the Web was not just a place where you took the formats of yesterday, reloaded them digitally and pursued the same business models. By 2005 much of this had been done, and over the next ten years we had some really interesting Web formats, many new variant business models, and the first tremors of the next shake-up. We call this round of shifting and grinding tectonic plates the Data revolution, and you need to look closely at micro-movements to see it happening. In an area like science research, always a useful bellwether, the last quarter showed real progress in terms of the reaction of major players. In landmark announcements in the past three months, Springer Nature have indicated that their SciGraph now contains over a billion metadata items, while Elsevier have cleverly released their Unified Data Model (UDM) to a club of Pharma companies. In short, this means that the largest traditional content players in the sector are awakening to two critical factors in the post-content world: the content-about-the-content will be more important than the content itself, and your data model will be the most important means you have of communicating with your customers.


This column has many times rehearsed the market moves away from research and towards workflow. We have dwelt here at almost embarrassing length on the device as a solution rather than a primary access point. In the research world we can clearly see the emergence of a tools and services economy, in a market that has moved away from budget-restricted purchasing points like the library and towards a total concentration on researcher support. Many publishers would love to go on living in a traditional publishing world – especially in scholarly societies dependent on journal income – but as Roger Schonfeld has indicated in two recent Scholarly Kitchen articles, it is simply no longer possible. If even Titans like Elsevier and Springer Nature are moving off the floodplain and seeking higher ground, everyone else needs a lifeboat. This is a consolidating market too – acquisition and failure are increasing, though who would want to buy traditional journals at present? Consolidation here means outlets decreasing, more preprints, and an increase in informal availability and transmission (ResearchGate).


But the move to a data-driven market, where metadata searching is routine and text and data mining is a fluent part of most research processes, implies that everything is available to be swept. Academic publishing is a paywall world where use of advanced mining techniques has to be negotiated with data holders. And publishers building analytics will find that they need a centrality of deployment to make them meaningful. As Roger Schonfeld indicates, this implies a partnering spirit that is alien to the capitalist spirit. As Danny Kingsley, director of Scholarly Communications at Cambridge, said in an LII speech at the beginning of this month, the risk is that while the public purse can pay for some initial innovation, these funds cannot be reliably sustained – with the result that companies she named, like Elsevier, were buying their way into the academic service economy. This obviously worried her more than it does me – people fleeing for the hills cannot be too picky – but it does raise the interesting question of where the value now lies that underpins these players. It is surely not in the copyrights. It may be in the software. Increasingly it will be in the analytics, but this will be a fast-moving game of winner-takes-all – for a moment. And how many big service companies do you need? I suggest a market leader and a competitor to keep him honest is enough. This is the trouble with earthquakes: you end up sitting somewhere unfamiliar, waiting for the aftershocks.


To those who have reached this point, thanks and seasonal greetings. More funeral rites next year, for which I wish you every happiness and success – especially if you are tackling the enigma which is the networked digital society.
