A sudden thought. Doing an interview with some consultants yesterday (we are fast approaching the season when some major STM assets will come back into the marketplace), I was asked where I had estimated Open Access would be now when I advised the House of Commons Science and Technology Committee back in 2007 on the likely penetration of this form of article publishing. Around 25%, I answered. Well, responded the gleeful young PhD student on the end of the telephone, our research shows it to be between 5% and 7%. Now, I am not afraid of being wrong (like most forecasters, I have plenty of experience of it!), but it is good to know why, and I suspect that I have been writing about those reasons for the last two years. Open Access, defined around the historic debate twixt Green and Gold, when Quixote Harnad tilted at publishers waving their arms like windmills, is most definitely over. Open is not, if by that we begin to define what we mean by Open Data, or indeed Open Science. But Open Access is now open access.

In part this reflects the changing role of the Article. Once a place of publisher solace as the importance of low-impact journals declined, it is now the vital source of the things that make science tick: metadata, data, abstracting, cross-referencing, citation and the rest. It is in danger of becoming the rapid act at the beginning of the process which initiates the absorption of new findings into the body of science. Indeed, some scientists (Signalling Gateway provided examples years ago) prefer simply to have their findings cited, or to release their data for scrutiny by their colleagues. Dr Donald Cooper of the University of Colorado, Boulder, used F1000Research to publish a summary of data collected in a study that investigated the effect of ion channels on reward behavior in mice. In response to public referee comments he emphasized that he published his data set in F1000Research “to quickly share some of our ongoing behavioral data sets in order to encourage collaboration with others in the field” (http://f1000.com/resources/Open-Science-Announcement.pdf).

I have already indicated how important I think post-publication peer review will be in all of this. So let me now propose a four-stage Open Science “publication process” for your consideration:

1. The research team assembles the paper, using EndNote or another process tool of choice, but working in XML. They then make it available on the research programme or university repository, alongside the evidential data derived from the work.

2. They then submit it to F1000 or one of its nascent competitors for peer review at a fee of $1,000. This review, over a period defined by them, will throw up queries, even corrections and edits, as well as opinions rating the worth of the work as a contribution to science.

3. Depending upon the worth of the work, it will be submitted or selected for inclusion in Nature, Cell, Science or one of the other top-flight branded journals. These will form an Athenaeum of top science, and will continue to confer all of the career-enhancing prestige that they do today. There will be no other journals.

4. However, the people we used to call publishers, and the academics we used to call their reviewers, will continue to collect articles from open sources for inclusion in their database collections. Here they will do entity extraction and other semantic analysis to build what they will claim as the classic environments which each specialist researcher needs to have online. They will provide search tools that let users search the collection itself; the collection plus all of the linked data on the repositories where the original articles were published; or the collection, the data, and all other articles plus data that have been post-publication reviewed anywhere. They will become the Masters of Metadata, or they will become extinct (a minimal sketch of this kind of entity indexing follows below). This is where, I feel, the entity or knowledge stores that I described recently at Wiley are headed. This is where old-style publishing gets embedded into the workflow of science.
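
To make the fourth step concrete, here is a minimal sketch in Python of the kind of entity extraction and inverted indexing it implies. Everything in it is illustrative: the entity dictionary, the DOIs and the abstracts are invented, and a real publisher pipeline would work from curated ontologies and far richer semantic analysis than simple string matching.

    import re
    from collections import defaultdict

    # Toy "knowledge store": surface forms mapped to entity types.
    # A real pipeline would draw on curated ontologies (gene, chemical
    # or disease databases); these entries are purely illustrative.
    ENTITY_TYPES = {
        "ion channel": "protein family",
        "reward behavior": "phenotype",
        "mice": "organism",
    }

    def extract_entities(text):
        """Return the known entities found in the text, with offsets."""
        found = defaultdict(list)
        for entity in ENTITY_TYPES:
            for match in re.finditer(re.escape(entity), text, re.IGNORECASE):
                found[entity].append(match.start())
        return dict(found)

    def index_article(doi, text, index):
        """Add one article's extracted entities to an inverted index."""
        for entity in extract_entities(text):
            index.setdefault(entity, set()).add(doi)

    # Index two (invented) open-access abstracts, then search across them.
    index = {}
    index_article("10.1000/demo.1",
                  "Ion channels modulate reward behavior in mice.", index)
    index_article("10.1000/demo.2",
                  "A survey of ion channel structure.", index)

    print(index["ion channel"])  # {'10.1000/demo.1', '10.1000/demo.2'}
    print(index["mice"])         # {'10.1000/demo.1'}

The design point is the inverted index: once entities rather than words are the keys, the same structure can answer searches across articles, linked data and anything else that shares the metadata.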

So here is a model for Open Science that removes copyright in favour of CC licences, gives scope for “publishers” to move upstream in the value chain, and lets them compete increasingly in the data and enhanced-workflow environments where their end-users now live. The collaboration and investment announced two months ago between Nature and Frontiers (www.frontiersin.org), the very fast-growing Swiss open access publisher, seems to me to offer clues about the collaborative nature of this future. And Macmillan Digital Science’s deal on data with SciBite is another collaborative environment heading in this direction. In all truth, we are all now surrounded by experimentation and the tools to create more. TEMIS, the French data analytics practice, has an established base in STM (interestingly, their US competitor, AlchemyAPI, seems to work mostly in press and PR analysis). But if you need evidence of what is happening here, go to www.programmableweb.com and look at the listings of science research APIs. A new one this month is the BioMortar API, offering “standardized packages of genetic patterns encoded to generate disparate biological functions”. We are at the edge of my knowledge here, but I bet this is a metadata game. Or take ScholarlyIQ, a package to help publishers and librarians sort out what their COUNTER stats mean (endorsed by AIP); or the ReegleTagging API, designed for the auto-tagging of clean energy research; or, indeed, the OpenScience API, Nature Publishing’s own open access point for searching its own data.
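
As an aside, here is a sketch of what calling one of these search APIs could look like from a researcher’s own script. The endpoint, query parameters and response fields below are hypothetical, invented for illustration; none of the services named above is being described, and each documents its own real interface.

    import json
    import urllib.parse
    import urllib.request

    # Hypothetical endpoint: the URL, the "q"/"rows"/"apikey" parameters
    # and the response layout are invented for illustration only.
    BASE_URL = "https://api.example.org/articles/search"

    def search_articles(query, api_key, rows=10):
        """Query a (hypothetical) article search API and return its JSON."""
        params = urllib.parse.urlencode(
            {"q": query, "rows": rows, "apikey": api_key})
        with urllib.request.urlopen(f"{BASE_URL}?{params}") as response:
            return json.load(response)

    # Usage, once a real endpoint and key are substituted:
    # results = search_articles("ion channels reward behavior", "YOUR_KEY")
    # for record in results.get("records", []):
    #     print(record.get("doi"), record.get("title"))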

And one thing I forgot. Some decades ago, I was privileged to watch one of the great STM publishers of this or any age, Dr Ivan Klimes, as he constructed Rapid Communications of Oxford. Then our theme was speed: in a world where conventional article publishing could take two years, by using a revolutionary technology called fax to work with remote reviewers he could do it in four months. Dr Sam Gandy, an Alzheimer’s researcher, is quoted by F1000 as saying that his paper was published in 32 hours, and they point out that 35% of their articles take less than four days from submission to publication. As I prepare to stop writing this and press “publish” to release it instantly, I cannot fail to note that immediacy may be just as important as anything else for some researchers, and for their readers.


Comments

5 Comments so far

  1. Shane O'Neill on April 30, 2013 19:14

    One could almost say, regarding 20 years of the Open Access debate, and with reference to Dr Cooper’s valuable research quoted above:

    Parturient montes, nascetur ridiculus mus. (The mountains are in labour; a ridiculous mouse will be born.)

  2. Stevan Harnad on May 1, 2013 07:35

    Umm, the current total annual Global OA (Green + Gold: mostly Green) is now around 35%. A growing set of Green OA mandates from funders and institutions in the UK, US & EU is gathering to accelerate OA growth still more. See ROARMAP.

    Mus quod rugiebat (the mouse that roared)

  3. dworlock on May 1, 2013 11:41

    My argument was that needs and technology have moved on, making the Open Access debate into a different argument. The international research team whom I quoted were looking at the whole global market, and certainly looking wider than ROARMAP. Still, they could be wrong. I recall that 20 years ago we never really resolved whether the global market meant 7,000, 12,000, 16,000, or 56,000 journals. Measurements in this sector seem to reflect the market share percentages that the combatants want: the market definition adopted is the one which will prove the point!

    veritas vos liberabit (the truth will set you free)

  4. Convenience versus Community — Is a Deeper Question Hiding Behind the Façade of the Access Debates? | The Scholarly Kitchen on May 2, 2013 10:30

    […] Worlock has been thinking along similar lines, I discovered after writing the bulk of this post. He thinks the debate has shifted: Open Access, defined around the historic debate twixt Green and Gold, when Quixote Harnad tilted […]

  5. Infobib on June 4, 2013 15:12

    […] David Worlock asks: Is Open Access over? […]