This week is Frankfurt, and thus the pleasure of interviewing Annette Thomas, Macmillan's CEO, on the STM conference agenda, the traditional forerunner of the Frankfurt Book Fair. And I find a hint of nostalgia in the conference programme which precedes our event. It has a traditional flavour. For whenever STM publishers sit down to discuss the twin evils of Open Access and Peer Review (or those who slight it), they do so with a lip-smacking relish more akin to tucking into Christmas turkey than to a logical discussion of the issues facing scholarly communication. Indeed, I sometimes wonder if "science publishing" has gone off on its own, leaving "scholarly communication" to the scholars.

Let me try to illustrate what I mean. The looming crisis in STM, in my warped view, is the data crisis. In every other sector it is rapidly becoming clear that increasingly sophisticated data mining and extraction techniques will come into play as users seek to extract new meaning from existing files, and further discovery as they cross-search those files with currently unstructured content held elsewhere. STM, it seems to me, is peculiarly susceptible to this Big Data syndrome, for behind the proprietary content stores of perfectly preserved published research articles "owned" by publishers lies the terra incognita of research data and findings held in labs and on research networks. Future scholars will want to search everything together, and will be impatient with barriers which prevent this. Once the tools and utilities which comprise research workflow become generally available and the techniques and value of semantic searching lock into this, the urge becomes irresistible, and scholarly article data gets versioned, commoditized, "outed". It does not really matter if it is located on the open web, the closed web, in the cloud or in a university repository.

The implications of this are vast. Scholars want to be published by prestigious branded journals as a way of being noted: they also want to be searched in the bloodstream of science. They will make sure they are everywhere, and that their data is where it needs to be as well. The metadata may note that this article was Gold OA and that one was published by Science, but this may be of most interest to the filtering interface in the workflow environment, which uses the information to rank or value results. And there is a finding from 25 years ago which continues to haunt me in STM, which alleges that most searches are performed not to find claims or results, but to discover, check and compare experimental methodologies and techniques. In a world where regulation and compliance grow ever more powerful, this is unlikely to diminish.

So I have come to feel that Open Access (one participant asked me what market share it would eventually have, and was appalled when I said 15% – before it becomes wholly irrelevant) and Peer Review (increasingly all research validation exercises will be multi-metric, so even the traditional argument collapses) are more about the preservation of publishers than the future of scholarly communication. Not that I object to that preservation, but I really did sit up as Annette Thomas, in her interview, began to describe some of the game-changing activity that Digital Science, child of Nature, is doing as an investor in a variety of workflow-enhancing technologies built by bench researchers for themselves (http://digital-science.com/products).

And in particular the announcement, made during the session, that Labtiva, a Digital Science investment at Harvard (sited in Dogpatch Labs), was launching ReadCube as an App (http://www.readCube.com). If anything bespeaks workflow, then it is the App. And what does this one do? It allows researchers to order their current world of articles as a personal content library, free and Cloud-based, with features like a filing system for PDFs, fast download from a university or institutional login, the ability to save and re-read annotations, cite and create references, and a personalised recommendation service. In other words, a smart App, worthy of the world of iPad, which solves the distressing everyday issues of finding what you once downloaded, recalling what you once thought about it, and finding more of the same. What could be simpler? But in simplicity like this there is a form of beauty. An App is definable as a workflow tool which takes clumsy pieces of multi-stage routine out of daily interactions with work – and makes sure you do not have to remember next time the cumbersome process you had to perform to do that.

So, whatever the introspective mood in the room, here is one publisher setting off on the migration to new values, determinedly seeking out the pain points in the researcher's working life and seeking to solve them. And indeed, other publishers (including Elsevier with their SciVerse and SciVal developments) are heading in the same direction. Yet the contrast between this and the generality of players in the sector is profound. At one point in the meeting I found myself in a discussion about what was going right with STM in a difficult marketplace dependent on government finance. Well, said one very knowledgeable source, we are doing a great deal with eBooks, selling them into places we never thought we would reach. Enhanced with video or audio? No, just reversioning of text. And library subscriptions are holding up really quite well, said another, and the market seems to have been able to absorb some limited price increases. And so I took away a picture of a sector holding its breath and hoping that things would revert to normal, and traditional business models would prevail. But we all knew in our hearts that when "normal" came back it would be different. Postponing the trek down the road to Dogpatch Labs only loses first-mover advantage and the experience born of re-iteration, and ensures that it will be more difficult to change successfully in the long term.

In the last weeks and months I have written so much about data businesses, workflow strategies, data and software acquisitions and how major players are being reborn in the heat of all this that I should have expected the criticism. When it came, I was shocked. Me, losing sight of the big picture? After all those years of consultancy when clients told me that the big picture was all I had, and the operational reasons why the big picture was unlikely were beyond me? OK, now here is an unashamedly big picture piece.

In the big picture we can see the battalions of information services companies, having emerged from the publishing stage of their development, developing strategies around data – either as Big Data, mining and extraction players, or as workflow and process emulation players. These are all businesses driven by understanding how users work in a networked society, and they are all about the way in which content and software interact to create solutions for the bench researcher, the equities trading risk manager, the teacher and the learner, the patent attorney and his office, or the insurance risk assessor. And many others. And then, through longer workflows, solutioning at the job level begins to turn into solutioning at the industry level. Users, through shared APIs, create their own answers, and these become generalized and re-iterated by the information service vendors, and over time smaller competitors are excluded. This becomes a rich man's game: duopolies become the norm, as they already are in some verticals, and then duopolies give way to quasi-monopolies and invite regulatory attention, as is already happening in others. Competing with these giants is difficult, and market entry based on re-originating workflow approaches built on the experience of countless users will come to be seen as prohibitively expensive and pointless. So competition authorities will settle for price/margin controls and for restricting the number of verticals that one corporation can dominate.

While all this is going on, the information service players of today are playing a three-card game of risk. I hear this dialogue every day, and it goes like this:

STAGE 1  "We now have good business in selling data into process – but the data is very commoditized and the value is in the software which holds it, searches it and provides the end-user access and workflow. We had that stuff written under contract because it was too risky to think of owning it or developing it in-house – we have no experience of software or of managing it! And, looking at the contract we drew up with the supplier, we appear to own very little. So the time has come to invest in software, manage our own solutions and just hope that we can cope with the constant iteration of solutions. We will buy our supplier!"

STAGE 2  "This is more difficult than we thought. The innovation that we want is taking place outside the range of the outfit we bought. If we are to continue to innovate in the face of rapidly developing user expectations (and that is the problem, not competition from our peers) we need to work with higher-level suppliers in areas like semantic web, entity extraction etc. So let's do different deals: not sub-contracts and licensing this time, but Strategic Partnership, with exclusivities in certain areas and revenue and/or margin sharing. We will incentivize these people to greatness – but which one do we choose, and what criteria do we use to select them?"

STAGE 3  "Well, the strategic relationships are working fine, but these software guys are eating our margins. And they say that all we have to do is update, while they have to re-invest, and 90% of the value in the package is software. And can they buy us? And their toolkit, honed on our clients to whom we did the selling, is now so valuable that IBM are trying to buy them … and maybe us as well. What do we do now, except grin all the way to the bank?"

There are three critical big picture issues that I take away from all of this:

* If the information services industry succeeds, it will one day attract the attention of the major Enterprise software players. If this is so, we need to make our own luck and form relationships now. I see this taking place around Oracle in some sectors, and IBM in others.

* Most relationships between content houses and software houses begin with improvements to the data, content and internal workflow of the content player. But the content player's end user/client is also vitally in need of systems for handling his own content, and other third-party content which he has already licensed, and for making it compatible with the workflow solution he is buying. There should be rich pickings here for both the content and the software players in terms of referrals and commissions. Somehow it isn't happening, but if it did, it would iron out some of the creases in those Strategic Alliances.

* Consultancy and customization are the keys to the solutioning marketplace. Trying to sell one-size-fits-all never quite does it in terms of repeat business. Yet most of the participants seem to dislike both of those elements, even though they are the best protection so far known to man for the defence of niche positions.

Next week, back to the coalface!
