In a recent online conversation, one of the most distinguished publishers of our time noted that “STM publishers are, at least for the time being, more successful than general publishers in transitioning their business away from reader-pays to creator-pays (which strangely is seen as morally superior)”. And it is quite true that there is a moral overtone to the notion that research funded from the public purse, and conducted by people paid from the public purse, should be generally available to the public. But this is not really the point anymore. The issue has grown larger, and recent events and a 21st birthday highlight a broader emerging question: if STM publishers were successful in going Open Access and supporting a creator-pays business model, how will they cope with the next migration, if that is towards Open Platform and funder-pays, in a context that does not really seem to require publishers in quite the same way at all?

But first of all, birthday greetings. Vitek Tracz, consistent presiding genius at the business of forecasting change in scholarly communications, launched F1000 21 years ago, and despite the sneers (and the big players have always sneered before they bought his creations), it is alive and well in the safe hands of T&F and the management of Rebecca Lawrence and her team. They have piloted it through ORC, the open publishing platform for major funders like Wellcome, Gates, HHMI and AMRO, into becoming ORE, the open platform selected by the European Commission to publish research outputs from its funded Horizon 2020 research programmes. The lure for researchers is simple processing: the author gets to publish it the way they want it published, with low charges (F1000 is $1350, ORE is free) and post-publication, ongoing open peer review. Having asked for 21 years whether this will catch on, I am still in the dark, but one thing is becoming reasonably clear. The rumbling discontent with current peer review, with retractions and with reproducibility creates a much better climate for the acceptance of these things than anything we have ever seen before. And as cOAlition S rolls forward into full application in the next two years the atmosphere will improve.

One signal which may encourage the birthday celebrant was the announcement made on 6 August that the UK funding body, UKRI, in association with Jisc, had made a grant to a project called Octopus, which they proclaim as “a platform which will change research culture”. The funding is pitiful – a mere £650,000 over three years – but this is clearly a scoping exercise, and the intent is more interesting than the amount. Dr Alexandra Freeman, whose notion this is, won a Royal Society Pitch award, and her presentation gives interesting evidence of her thinking, where platforms are language-agnostic (simultaneous translation), post-publication peer reviewed, and where reviews themselves are treated as ancillary publications. But the really interesting part of the proposal is the break-up of the article itself. Dr Freeman sees it as dividing into eight different segments, each of them appearing on the platform as soon as it is ready, and thus each element being susceptible to review at that point. Her eight sections are: Problem; Hypothesis; Methodology/Protocol; Data/Results; Analysis; Interpretation; Real-world Implications; Peer Review. It will be seen that the thinking leans towards the Open Science insistence on separating the publication of the first three elements in time, prior to results being available. It also encompasses another strand of funder thinking – all the work that has been accepted and funded, through increasingly expensive selection processes, should subsequently appear on a platform and be peer-reviewed. The process of publisher/editor selection may not now be wanted on board.
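Dr Freeman’s modular structure can be sketched in code. What follows is a minimal illustration, not anything from the Octopus proposal itself: every class and field name is my own assumption, chosen only to show how segments might go live and attract review independently of one another.

```python
from dataclasses import dataclass, field
from enum import Enum

# The eight segment types named in the proposal above.
class SegmentType(Enum):
    PROBLEM = 1
    HYPOTHESIS = 2
    METHODOLOGY_PROTOCOL = 3
    DATA_RESULTS = 4
    ANALYSIS = 5
    INTERPRETATION = 6
    REAL_WORLD_IMPLICATIONS = 7
    PEER_REVIEW = 8

@dataclass
class Segment:
    kind: SegmentType
    body: str
    published: bool = False
    # Reviews attach to the segment, not to a whole article, and are
    # themselves treated as publishable units.
    reviews: list = field(default_factory=list)

    def publish(self) -> None:
        # Each segment appears on the platform as soon as it is ready,
        # independently of the other seven.
        self.published = True

    def add_review(self, review_text: str) -> None:
        # Post-publication review: only a published segment can be reviewed.
        if self.published:
            self.reviews.append(review_text)

# Usage: the first three elements can be published and reviewed
# before any results exist at all.
problem = Segment(SegmentType.PROBLEM, "Why do X cells behave like Y?")
problem.publish()
problem.add_review("Well-posed question; prior work Z overlaps.")
```

The point the sketch makes is structural: review becomes a property of each element, so a hypothesis can be scrutinised long before the data that tests it appears.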

And there will be other casualties if the Open Platform replaces the current OA models. Plenty of talk of scoring and altmetrics – no mention of journal impact factors. Open Platforms will use ORCID identifiers. They will emphasise speed of communication and simplicity of use (Vitek Tracz may be pleased that he accidentally failed to sell the support software Sciwheel when he sold F1000 – workflow tools for Open Platforms may be the most valuable part). It was interesting to see that amongst the prominent supporters of the award to Dr Freeman was the UK Reproducibility Network. The fact that there is such a network of UK universities and researchers, and that it has pronounced and interesting views on how things should be published, is a clear sign of a change of mood. When the history is written, the shift from subscription publishing to Open Access will be seen as the small conditioning change that paved the way to a complete revolution in the way in which science is communicated.

We have been over this ground before, you may be thinking. Isn’t this the one where I say there are now more readers in machine intelligence than in bone, blood and tissue? And that these machine readers communicate effectively with each other, perform advanced analysis, and are increasingly able to perform as writers of at least parts of articles as well as readers? So traditional human roles in scholarly communications – like reading all the pertinent articles, or doing the literature review, or managing the citations – can be increasingly automated. Yes, it is that article again, but with a new twist, courtesy of an interview conducted by Frontiers with Professor Barend Mons, who holds the chair in BioSemantics at the University of Leiden. Going behind the interview and picking up some of his references quickly showed me how facile I was being when I first described here the trends that I was seeing. If we follow Dr Mons, then we turn scholarly publishing on its head – almost literally.

Let’s start where all publishing begins. Scholarly communications reflects the way in which humans behave. They tell stories. The research article is a structured narrative. As publishers we have always known that narrative was the bedrock of communication. So the issues that Dr Mons broaches are startling, obvious and critical. Narrative is not the language of machines. Data is the language of machines. In order for our machines to understand what we are talking about we have to explain what we mean. So we turn content into data and then add explanations, pointers and guidelines in terms of metadata. And even then we struggle, because we still see the research article as the primary output of the research process. But as far as the machine-as-reader is concerned this is not true. The primary output of the research process is the evidential data itself and what emerges from it. The research article, in a machine-driven world, is a secondary artefact, a narrative explanation for humans, and one which needs far more attention than it currently gets if it is ever to be of real use to machines.

So we are in a mess. The availability of data is poor and inconsistent (my words). Mons points to the speed of theoretical change in science – knowledge is no longer dominated for a generation by a scholar and his disciples (he quotes Max Planck to the effect that in his day science progressed funeral by funeral). The data is not prepared consistently and (again, my words) publishers are doing little to coordinate data availability, linkage, and preparation. They do not even see it as their job. Dr Mons, as an expert in the field (he was one of the chief proponents of the FAIR principles and is regarded as a leading Open Science advocate; he is also the elected president of CODATA, the research data arm of the International Science Council), plainly sees as urgent the need to enrich metadata and enable more effective machine-based communication. When I look back on the past decade of entity extraction, synonym definition, taxonomy creation and ontology development I find it dispiriting that we are not further on than we are. But then Dr Mons directed his listeners towards BioPortal, which helps to visualise the problem: 896 ontologies in biomedicine alone, creating 13,315,989 classes. Only the machine can now map evidence produced under different ontologies at scale, and by Dr Mons’ account it needs standardisation, precision and greater detail to do so.
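The mapping problem can be made concrete with a toy example. Everything below is illustrative: the two vocabularies and the shared concept identifiers are invented stand-ins, not drawn from BioPortal or any real ontology.

```python
# Two hypothetical vocabularies that annotate the same concept under
# different surface terms. Mapping both onto a shared canonical identifier
# is what lets a machine treat evidence from the two sources as comparable.
ONTOLOGY_A = {"myocardial infarction": "CONCEPT:0001", "neoplasm": "CONCEPT:0002"}
ONTOLOGY_B = {"heart attack": "CONCEPT:0001", "tumour": "CONCEPT:0002"}

def same_concept(term_a: str, term_b: str) -> bool:
    """True when a term from each vocabulary resolves to one shared concept."""
    concept = ONTOLOGY_A.get(term_a)
    return concept is not None and concept == ONTOLOGY_B.get(term_b)
```

With two tiny dictionaries the alignment is trivial; at the scale the paragraph above describes – 896 ontologies and over 13 million classes – those mapping tables cannot be curated by hand, which is precisely why the machine is the only reader that can do the job.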

If the people we call publishers today are going to take seriously the challenge of migrating from journals and monographs to becoming the support-services element of the scholarly communications infrastructure, then the challenge begins here. We need a business model for enhancing, standardising and co-ordinating data availability. This is not necessarily about repositories, storage or the Cloud – but it is all about discoverable availability, data cataloguing and establishing improved standards for metadata application. And there is an implied challenge here. Dr Mons nods towards the role of advanced data analysis in helping the discovery of Covid-19 vaccines. But the task he describes is not one of using what we know to track more dangerous and contagious variants. He sees the challenge as the requirement to use our data and analytical powers to profile all of the feasible variants which could possibly become a threat, and to develop our vaccines to the point where they meet those threats before they arise. If our data is not re-usable and is unfit for purpose we do not get there. The depth of the challenge could not be clearer.
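What “discoverable availability” might mean in practice can be sketched as a simple metadata completeness check. The field names here are my own illustrative choices, loosely in the spirit of the FAIR principles rather than any specific metadata standard, and the DOI is hypothetical.

```python
# A minimal sketch of a dataset record and a completeness check:
# a record is only machine-discoverable when every required field is filled.
REQUIRED_FIELDS = {"identifier", "title", "license", "ontology_terms", "access_url"}

def fair_gaps(record: dict) -> set:
    """Return the required metadata fields that are missing or empty."""
    return REQUIRED_FIELDS - {k for k, v in record.items() if v}

record = {
    "identifier": "doi:10.0000/example",   # hypothetical DOI
    "title": "Variant spike-protein binding measurements",
    "license": "CC-BY-4.0",
    "ontology_terms": [],                  # no standard vocabulary attached
    "access_url": "",                      # the data is not actually retrievable
}

print(sorted(fair_gaps(record)))  # → ['access_url', 'ontology_terms']
```

The record above is typical of the mess described earlier: it exists, it has a title and a licence, but a machine can neither fetch the data nor relate it to anything else. A business model for data enhancement is, at bottom, a business model for closing those gaps.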

Some participants clearly see a way forward. The recent announcement of a strategic relationship between Digital Science and Ontochem is an encouraging example. But most of us, for the most part, are still article-centric and not data-driven. We urgently need business models for data enhancement, to do what we should have been doing over the past decade.
