Sitting through the summer months beside a misty inlet on the Nova Scotian coast, it is all too easy to lose oneself in the high politics of OA and OER, of the negotiations between a country as large as California and a country as large as Elsevier, or of whether a power like Pearson can withstand a force as large as McGraw with added Cengage. I am in the midst of Churchill’s Marlborough: His Life and Times. There momentous events revolve around a backstairs word at Court. There great armies wheel in the Low Countries as Louis XIV and William of Orange contend for supremacy. Wonderful stuff, but is it the stuff of history? Nothing about peasants as soldiers, or about harvests and food supplies? Likewise, if we tell the story of the massive changes taking place in the way content is created and intermediated for re-use by scholars and teachers without starting with the foot-soldiers, by which I mean not just researchers and teachers but students and pupils as well, then I think we are in danger of mistaking the momentum as well as the impact of what is happening now.

When our historians look back, hopefully a little more analytically than Churchill, I think they will be amazed by the slowness of it all. We are now 30 years beyond the Arpanet becoming the Internet, and more than 20 years into life in a Web-based world. Phone books are an historical curiosity and newspapers in print are about to follow. Business services have been transformed, and the way most of us work, communicate and entertain ourselves is firmly digital. Yet nothing has been as conservative and loath to change as the academic and educational establishments of the developed world, and they have largely succeeded in imposing that conservatism on the rest of the world. From examination systems to pre-publication peer review, traditional quality markers have remained in place for the assurance, it is held, of governments, taxpayers and all participants in the process. And while the majority of inert content became digital very early in the 30-year cycle of digitisation, workflow and process did not. Thus content providers were held in a hiatus: as change took place at the margins, you needed to supply learning systems as well as textbooks (who would have guessed that it would be 2019 before Pearson declared itself Digital First?). And by the same token, who could have imagined that we would reach 2019 before eLife’s Reproducible Document Stack made it technically feasible for an “article” to contain video, moving graphics, manipulable graphs and evidential datasets?

It is not hard to identify the forces of conservatism that created this content Cold War, in which everyone had to keep things as they had always been, and as a result of which publishing consolidated, and is still consolidating, into two or three big players in each sector. It is harder to detect the forces of change that are turning these markets into an arms race. These factors are mostly not to do with the digital revolution, much as commentators like me would like the opposite to be true. Mostly they are to do with the foot soldiers of Marlborough’s armies, those conscripted peasants, those end users. When we look back we shall see that what made change happen was the revolt of middle-class American parents and their student children against textbook prices, the wish of the Chinese government to get its research recognised globally without a paywall, the wish of science researchers to demonstrate outcomes more quickly in order to secure reliable forward funding, and the wish of all foot soldiers for greater interoperability of content in the device-dominated, data-centric world into which they have now emerged.

And how do we know that? You need an instrument of great sensitivity to measure change, or maybe change is a reflection of an image in the glass plate of some corporate office. Whatever else is said of them, I hold Elsevier to be a hugely knowledgeable reflection of the markets they serve. So I regard their purchase of Parity Computing as a highly significant move. When publishers and information providers buy their suppliers rather than their competitors, it says to me that whatever tech development they are doing in their considerable in-house services, it is neither enough nor fast enough. It says that still more must be done to ensure that their content-as-data is ready for intelligent manipulation. It also says that the developments being created by that supplier are too important, and their investment value too great, to think of sharing them with a competitor using that supplier.

Markets change when users change. But when the demand for change occurs, we usually already have the technology to meet that new demand; think of the 20-year migration from expert systems and neural networks to machine learning and AI. The push is rarely the other way round.

Standing in the crowded halls of the Frankfurt Book Fair is as good a place as any to fantasize about the coming world of self-publishing. After detailed discussion of Plan S, or DUL, or Open Access books, one can easily become waterlogged by the huge social, political and commercial pressures built up in our methodologies of getting work to readers. In general terms, intermediaries arise where process is so complex that neither originators nor ultimate users can cope with it without facilitation. So publishing in Europe was a refinement of the eighteenth-century role of booksellers, adding selection and financing to the existing self-publishing model. Over the next two centuries the new business model became so entrenched, and for some so profitable, that their successors behaved as if it had been ordained by God and nature and would live for ever. Intermediation will indeed probably be required for as far ahead as we can predict, but it is certain to change, and it is not certain to include all of the roles that publishers cherish most deeply.

Two episodes this week reinforce some of these feelings. In one instance, the scholarly market software company Redlink (https://redlink.com/university-of-california-press-adds-remarq/) announces an agreement to supply its software toolset to the University of California Press. Nothing unusual here, but something symptomatic: more and more publishers are using clever tools to heighten value and increase discoverability. But those software tools are also becoming more and more “democratic”: they can be used, in good machine learning contexts, to build technical capability at different points in the network, both before and after the “publishing process”. In other words, the more it becomes clear to, say, a funder or a research unit or a university that the divine mystery of publishing devolves to a set of software protocols, the more likely it is, given that publishers cannot control digital dissemination, that the control point for content release will migrate elsewhere. In a previous note I mentioned UNSILO’s manuscript evaluation system with very much the same thought in mind: while the pressure is on traditional publishers to arm themselves with the new intelligent tools, for competitive purposes as well as to increase speed and reduce cost, these tools also contain the seeds of a transition to a place where research teams, institutions and funders can do the publishing bit effectively for themselves. So the question left on the table is: what other parts of the processes of scholarly communication are left requiring intermediary support?

And so the struggle now is to look at those other parts of the scholarly research and communications process that are hard to gather and bring into focus and analysis. It was interesting in this light to reflect that Web of Science Group and Digital Science are already well down this track. Gathering together peer review and making sense of it (Publons) is the sort of thing that only an outside agency can do effectively, just as collecting and analysing posters (Morressier) will release layers of value previously unrecognised. And while many bemoan the abject failures in optimizing research funding through effective dissemination and impact of the results, only Kudos have really grasped the nettle and begun to build effective dissemination planning processes. But how can these interventions be put together and scaled? And how can we ensure the dataflows are unpolluted by self-promotion or by lack of verification and validation?

Some of these questions cannot be resolved now, but do we know that we are at least moving in a self-publishing direction? Well, the Gates Foundation and Wellcome, and perhaps Horizon 2020, seem to think so, even if they use intermediaries to help them. Researchers and academics are substantially self-publishers already, producing posters, blogs, tweets, annotations, videos, evidential data, letters and articles online with little assistance. And it was interesting to see last week’s Bowker report, which indicated 38% growth in self-publishing last year, to over a million new books across print and e-publishing, though ebooks are doing far less impressively. And then:

“Since 2012, the number of ISBNs assigned to self-published titles has grown 156 percent.”

http://www.bowker.com/news/2018/New-Record-More-than-1-Million-Books-Self-Published-in-2017.html 

Of course, this may just reflect consumer trends, but such trends alter attitudes to the possible in other sectors. Certainly the economic impossibility of the academic monograph in many fields will be affected by the growth of library crowdfunding models (Knowledge Unlatched), and this will extend to departmental and institutional publishing in time.

So I left my 51st Frankfurt in buoyant mood, thinking of the day when it is renamed the Frankfurt Publishing Software and Solutions Fair, held on one floor, and I can once again get into the bar of the Hessischer Hof with a half-decent chance of getting a seat, and a drink!

And then, as I was finishing, came this: https://osc.universityofcalifornia.edu/2018/10/open-source-for-open-access-the-editoria-story-so-far/

Open Source for Open Access: The Editoria Story So Far
