Amongst the many things accelerated by the COVID pandemic, the advance of Open Science could be the most marked. While the grateful world rejoices in the speed with which vaccines were produced, the unprecedented sharing of knowledge, techniques and data between the major labs – in Berkeley, Broad (MIT and Harvard), Oxford and elsewhere – was a major element, alongside the setting aside of normal competitive feelings between research teams. This enabled roll-out within a year rather than the usual vaccine cycle of three to five years. Add the fact that wholly new science was being deployed – messenger RNA vaccine technology – and we are left with a remarkable conclusion: in a collaborative environment and under Open Science protocols, things can go faster and become effective sooner than we had ever imagined.

With that in mind it is worth considering the role of publishing in all of this. Whenever I become too strident in talking to publishers in the science research communications sector about the changing role of journals, and the incoming marketplace of data and analysis, I usually get a fusillade of questions back about the opportunities in data, and the claim that significant flags have already been planted on that map. And they are right, though they often ignore the issues raised by increasingly targeted and specific intelligent analysis. They also ignore the fact that, outside of eLife and, to an extent, PLoS, no one of scale and weight in the commercial publishing sector has really climbed aboard the Open Science movement with a recognition of the sort of data and communication control that Open Science will require.

So what is that requirement? In two words – Replicability and Retraction. While we still live in a world where the majority of evidential data is not available with the research article, and is not always obviously linked to data in institutional repositories, it is hard to see how we move on from the position reported by Bayer’s researchers – that only 25% of the research they want to use can be reproduced in their own laboratories. Other studies have shown even lower figures. What does “peer review” actually mean if it produces this sort of result? Yet publishers have for years disdained publishing articles that “merely” reproduced existing results and validated previous claims. A publisher interested in Open Science would open up publishing channels specific to reproducibility, link successful and unsuccessful attempts to the original article and encourage others to do the same, while building collective data on replication work for analysis – including the analysis of widely cited papers which cannot be reproduced outside the ambit of the originating team.
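To make that concrete, here is a minimal sketch of what such a replication registry might look like – the field names and outcome labels are my own invention for illustration, not any publisher's actual schema. It links each replication attempt to the original article by DOI, and makes the "cannot be reproduced outside the originating team" pattern easy to spot:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class ReplicationRecord:
    """One replication attempt, linked by DOI to the article it tests."""
    original_doi: str      # DOI of the article being replicated
    replication_doi: str   # DOI of the published replication report
    outcome: str           # "replicated", "failed" or "partial"
    independent: bool      # True if attempted outside the originating team

def replication_summary(records):
    """Tally outcomes per original article, tracking independent successes."""
    summary = defaultdict(lambda: {"replicated": 0, "failed": 0,
                                   "partial": 0, "independent_successes": 0})
    for r in records:
        summary[r.original_doi][r.outcome] += 1
        if r.outcome == "replicated" and r.independent:
            summary[r.original_doi]["independent_successes"] += 1
    return summary

# A widely cited paper that only replicates in-house shows up here as
# failed independent attempts plus zero independent successes.
records = [
    ReplicationRecord("10.1000/original.1", "10.1000/rep.a", "replicated", False),
    ReplicationRecord("10.1000/original.1", "10.1000/rep.b", "failed", True),
]
print(replication_summary(records)["10.1000/original.1"])
```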

Open Science advocates would go further and push for the pre-registration of research methodology, peer reviewing the submission of the research plan and publishing it. This would prevent a subtle twist in the reporting that would allow the aims to be slightly adjusted subsequently to fit the evidence actually collected. To my knowledge, and I hope I am wrong, only PLoS has a facility for this at present. Searching and analysis of pre-registration data could be immensely useful to science, just as the activity itself could add greater certainty to scientific outcomes. In particular, it might lead to fewer retractions.

It is in this area that publishers can again make a huge contribution to Open Science. Retraction Watch and the US charitable foundations that support the two principals there do a brilliant job. Between 2010 and 2019 they reported on 20,000 retractions of journal articles, but the issue keeps growing, and the number of retractions between October 2019 and September 2020 rose by another 4,064. The fact that researchers are reporting and recording this data wherever they can find it is admirable, but surely publishing should be doing its own housekeeping, and collecting and referencing this data in a central registry. There have to be analytics, in an Open Science environment, which point to the effectiveness of peer review, and if peer review is as important as publishers claim, then protecting its standards should be a critical concern. Along with another Open Science mandate, the publishing of signed peer review reports alongside articles, this constant monitoring of retractions is vital if researchers are not to be misled. This is not about fraud, but about over-ambitious and unjustified claims. Publishers should not try to hide the number of retractions they have made, but use the open display of the results to demonstrate how effectively they work in the vast majority of cases.
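By way of a sketch of the housekeeping analytics I have in mind – and assuming a hypothetical registry export with "doi", "journal" and "retraction_date" columns, my own illustrative layout rather than Retraction Watch's actual schema – counting retractions by year and setting them against published output is straightforward:

```python
import csv
from collections import Counter

def retractions_per_year(registry_path):
    """Count retractions by year from a central registry export.

    Assumes a CSV with 'doi', 'journal' and 'retraction_date' (ISO
    format) columns -- an illustrative layout, not an actual schema.
    """
    by_year = Counter()
    with open(registry_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            by_year[row["retraction_date"][:4]] += 1
    return by_year

def retraction_rate(retracted_count, published_count):
    """Retractions as a share of published output -- the kind of openly
    displayed figure that could show peer review holding up in the vast
    majority of cases."""
    return retracted_count / published_count if published_count else 0.0
```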

The last element here is time. Publishers can use data and analytics far more effectively to track article lifetimes, show that work diminished in its first five years can come back into importance in its second, and show how retractions issued late in the life of an article can affect other work which cited it or was built around it. By the time we reach 2025, the data around the article life cycle will be far more important than much of the data in all but the most important research articles.
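One way to show that late retractions matter is to walk the citation graph outward from the retracted article and flag everything built on it. A toy sketch, with the citation index held in memory rather than drawn from a real service such as Crossref or OpenCitations:

```python
from collections import deque

def affected_by_retraction(retracted_doi, cited_by):
    """Find all work downstream of a retracted article.

    `cited_by` maps a DOI to the DOIs of articles that cite it.
    Returns every article that cites the retracted work directly or
    through a chain of citations -- the work 'built around it'.
    """
    affected, queue = set(), deque([retracted_doi])
    while queue:
        doi = queue.popleft()
        for citer in cited_by.get(doi, ()):
            if citer not in affected:
                affected.add(citer)
                queue.append(citer)
    return affected

# Toy example: B cites A; C cites B. Retracting A flags both B and C.
cited_by = {"A": ["B"], "B": ["C"]}
print(affected_by_retraction("A", cited_by))  # {'B', 'C'}
```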

