Since I last wrote a piece here I am older by three conferences and an exhibition. And no wiser for having spoken twice on cyber-security, a subject that baffles me every time I stand up to talk about it. The simple truth is that the networked world is changing at a pace that bewilders, yet the visions we have of where we are going hang before us, tantalising but currently unattainable. Thus, if you ask me about the future of education, I can spin you a glowing tale of individuals learning individually, at their own pace, yet guided by the learning journey laid out by their teachers, who have now become their mentors. The journey is self-diagnostic and self-assessing, examinations have become redundant and we know what everyone knows and where their primary skills lie. Or in academic or industrial research, projects are driven by results, research teams recruited on that basis, and their reputation is scored in terms of the value their peers set on their accomplishments. The results of research are logged and cited in ways that make them accessible to fellow researchers in aligned fields – by loading and pointing to evidential data, or noting results and referencing them on specialized or community sites, or by conventional research reporting. Peer review is continual, as research remains valid until it is invalidated and may rise and fall in popularity more than once. And so on through business domains, medicine and healthcare, agriculture and the whole range of human activity…

But at this point, when I talk about the growing commonality of vision, the role of workflow analysis, RPA, what happens next with machine learning, the eventual promise of AI, a hand shoots up and I find myself answering questions from the ex-CFO/now CEO about next year’s budget, and when will the existing IT investment pay back, and can this all be outsourced, and surely we don’t need to do any more than buy the future when it arrives? And of course these questions are all very pertinent. We all need to assure revenues and margins next year if we are to see any part of this future. And next year’s revenues will come from products and services which will look more like last year’s than they do like the things we shall be doing in 2025, even if we had an idea of what those might be. It is one thing knowing something about the horizons, quite another to design a map to get there. So at every point we seek every way we can to buttress future-proofing, and at the moment I am seeing a spate of that in acquisitions. Just as last year putting the word “Analytics” at the end of your name (Clarivate, Trevino) added a billion to the exit valuation, so this year the .ai suffix has proved to be a real M&A draw.

But those big Analytics sales were made, and will be onsold to people who want to expand their data and services holdings. The .ai sales are transplants from the seedbed, and far earlier stages of transplantation are involved. Having worked for some years as an advisor to Quayle Munro (now, as an element of Houlihan Lokey, part of one of the largest global M&A outfits) I realise that smaller and smaller sales may not be considered a good thing, but I cannot resist the idea that bringing some future tech developments into your incubator environment is going to have some really beneficial long-term effects. It already has at Digital Science. As Clarivate learns from what Kopernio knows, it will help. As the magic of Wizdom.ai rubs off on T&F, it will help there.

But, again, we are begging a hundred questions. Can you really future-proof by buying innovation? Well, only to a limited effect, but by having innovators inside you can learn a lot, at least from their different perspective on your existing customers. Don’t you need to keep them from being crushed by the managerial bureaucracy of the rest of the business? Yes, but why not try to free up the arthritic bits rather than treating the flexible bits? What if you have bought the wrong future tech? Even the act of misbuying will give you useful pointers the next time round, but if you have bought the right people they will be able to change direction. What if software people and text publishing people do not get on? They will need to be managed – this is your test – since if we fail the future will be conditioned entirely by software giants licensing data from residual fixed income publishers.

Are there any conditioning steps I should be taking to ease into this future? Yes, forget ease and go faster. Look first at your own workflow. To what extent is workflow automated? Do you have optimum ways of processing text? Are people or machines taking the big burdens of proofreading, or desk editing, or manuscript preparation? Is your marketing as digital as it could be? Are you talking the language of services, and designing solutions for your users, or are you giving your users reference sources and expecting them to find the answers? Indeed, do you talk the language of solutions, or the ritual language of format – book, journal, page, article? Are we part of the world our users are entering, or are we stuck in the world they are exiting? The exhibition I attended this month was the London Book Fair. I love it in all its inward-looking entrancement with itself, and its love affair with the title Publisher, the profession for which no qualification other than skill at explaining away unsuccess has ever been required. I can only take one day since I rapidly become depressed. But still there were very sparky moments – an impromptu discussion with the Chennai computer typesetter TNQ (www.tnq.co.in) about their ProofControl 3.0 service told me that these guys are on the ball. But moments like this were rare. More often I felt I was watching the future – of the industry in 1945!

Let me clear the way for a flow of words by first apologising to a critic of my blogging method. Thanks, AH, for your private communication. I am guilty as charged. I do indeed tend to sit down and start writing about whatever seems important to me. No, what I write is not meant to be funny, though I can see that in a laugh-starved world it could happen by accident. And, no, not all my readers drop out after two paragraphs: almost thirty per cent of the hardy souls are still there after 15 minutes, and surely not all of those left the machine on my page while coffee-making or answering an urgent call of nature. But I am flattered that you chose to write, and please feel free to make additional constructive comments on where I might locate myself. And do remember that not all Russian programmers are trying to bring down the civilized world as we know it!

And thanks to the rest of you while I got that off my chest. Let me now take you back to 1993. I am working on the first internet-related project that we ever received, and part of the work involves interviewing university librarians about their future. Most were unimpressed by technology arguments, but one, I suspect the great and long-sighted Mel Collier, most recently at Leuven, said that in a properly networked world the researcher and the graduate student and the undergraduate could all take the university library home with them, but it might look slightly different in terms of access and facilitation, according to who you were. And then, a week or so ago, I was talking to Jan Reichelt, co-partner in the creation of Mendeley and its former manager after the Elsevier acquisition. He and his colleagues are behind Kopernio (www.kopernio.com), the plug-in that allows researchers and others to record all of the permissions they have been granted by their libraries, take them home and use them as if they were sitting in the campus library building. And this is not unique – there are other systems like Unpaywall.org around the place. But if Kopernio gets the widespread adoption that I believe it will, then it is a gamechanger – in the psychology of the researcher/student, in the sense of where research may properly be done, and in the personality of the library in the minds of its users.

Publishers should be queuing up to work with Kopernio. In the age of SciHub and downloadable papers on ResearchGate, students who can find everything they need online while drinking cocoa at home, or researchers who can use the library through the weekend to meet a project deadline, without infringing copyright but while increasing the satisfaction ratios of library contracts, are very valuable to librarian and publisher alike. And the fact that it has taken 25 years to get to this point underscores a very well-understood relationship: perceptions of the extent of change in any networked domain are easy to make and, if we have enough imagination, the impacts can soon be appreciated and understood. Making the changes themselves takes two decades and more, and calibrating what those changes mean takes even longer. Truly the networked world is now very fast at identifying change, often very slow in adopting it fully, and hopeless at anticipating what happens next.

But Kopernio is important in another sense. It marks a further stage in the elision of roles in scholarly communication. The library that the researcher takes home is not only Science Direct and Springer Nature – it is PubMed and Plos One, and of course the whole world of ab initio OA publishing. There will inevitably be a dominant technology in this space, and I can easily envisage Kopernio filling that role. At that point, one of the major publishers, migrating anxiously away from journals towards research data workflow and management, will want to buy it to automate the permissions behind discovery processes in researcher workflow. And then we are on the verge of a much larger change process. The new style publisher will sell the research department the workflow software, the database technology, the means of connecting to stored knowledge, and, increasingly, the tools for mining, sifting and analysing the data. Some of these will be implemented at university, library, research project or individual level. But the outcomes of research in the form of reports, findings, communication with funders, and eventually research articles and publishable papers, will all be outputs of the research process software, and there will be little point in taking the last-named elements apart to send them to a third party for remote publication. There is no mystery about technical editing that cannot be accomplished in a library or a research department, or by copying publishers and using freelance specialists online. And most other so-called publishing functions, from copy editing to proofreading, are semi-automated already and will be fully robotic very soon.

There is a trail here which goes from Ubiquity Press (www.ubiquitypress.com) to science.ai. And it is happening, under the odd label of Robotic Process Automation (RPA), in every business market that I monitor. It is not really AI – it is machine learning and smart rule-based systems, which are far commoner than real AI. Back in 1993 we used to call internet publishing “musical chairs” – the game was to look at the workflows and decide, when the music of newly networked relationships stopped, who was sitting in whose chair. In those days we thought the Librarian was left standing up. But with the advantage of time I am no longer so sure. The Library seems on the verge of becoming the Publisher (note the huge growth of University Presses in the US and UK), while the former journal publisher becomes the systems supplier and service operator. Simply making third party content available may be too low a value task for anyone to do commercially in an age of process automation.

Now, Miss AH, was that wildly funny? If it was, please tell me the joke! But I do promise more RPA next time.
