Two contrasting views of the future struggle against each other whenever we sit down to talk data strategy. One could be called the Syndication School. It says “forget getting all the data into one environment – use smart tools to go out and search it where it is, using licensing models to get access where it is not public.” And if the data is inside a corporate firewall, as distinct from a paywall? MarkLogic’s excellent portal for pharmaceutical companies is an example of an emerging solution. 

But what happens if the data is in document content files with insufficient metadata? Or if that metadata has been applied differently in different sources? Or if three or four different sorts of content-as-data need to be drawn from differently located source files, which must be identified and related to each other before being useful in an intelligent study process? Let’s call this the Aggregation School – assembly has to take place before processing can begin. But let’s not confuse it with bulk aggregators like ProQuest. 

And now put AI out of your mind. The term is now almost as meaningless as a prime ministerial pronouncement in the UK. This morning saw the announcement of three more really exciting new fundings in the Catalyst Awards series from Digital Science. BTP Analytics, Intoolab and MLprior are all clever solutions using intelligent analysis to serve real researcher and industry needs. But the need to label everything AI is perverse: those who grew up through 25 years of expert systems and neural networks will know the difference between great analytics and breakthrough creative machine intelligence. 

But while we are waiting, real problem-tackling work is going on in the business of aggregating multi-sourced content. The example that I have seen this week is dramatic and needs wider understanding. But let’s start with the issue – the ability, or inability, especially in the life sciences, of one researcher to reproduce the experiments created, enacted and recorded in another lab simply by reading the journal article. The reasons are fairly obvious: data not linked to the article, or not published at all; a methodology section that is a bare summary (video could not be published in the article); an abbreviated references section; metadata coverage insufficient to discover what the article does contain; a metadata schema radically different from that of other aligned articles of interest; relevant reproducibility data held not in the article but on a pre-print server, in conference proceedings, in institutional or private data repositories, in annotations and responses to blogs or commentaries, in code repositories, in thesis collections, or even in pre-existing libraries of protocols. And all or any of these may be Open, or paywalled.

In other words, the prior problem of reproducibility is not enacting the experiment by reproducing the same laboratory conditions – it lies in researching and assembling all the evidence around the publication of the experiment. This time-consuming detective work is a waste of research time and a constraint on good science, and calling for AI does not fix it. 

But Profeza claim they are well down the arduous track towards doing so. And it seems to me both a fair claim and an object lesson in the real data-handling problems that remain when no magic-wand technology can be applied. Profeza, an India-based outfit founded by two microbiologists, started with the grunt work and are now ready to apply the smart stuff. In other words, they have now made a sufficient aggregation of links between the disparate data sources listed above to begin to develop helpful algorithms and to roll out services and solutions. The first, CREDIT Suite, will be aimed at publishers who want to attract researchers as users and authors by demonstrating that they are improving reproducibility. Later services will involve key researcher communities, market support services for pharma and reagent suppliers, and intelligence feeds for funders and institutions. It is important to remember that whenever we connect dispersed data sets, the outcome is almost always multiple service development for the markets thus connected. 

Twenty years ago publishers would have shrugged and said “if researchers really want this they can do it for themselves”. Today, in the gathering storm of Open, publishers need to demonstrate their value in the supply chain before the old world of journals turns into a pre-print server before our very eyes. And before long we may have reproducibility factors introduced into methodological peer review. While it will certainly have competitors, Profeza have made a big stride forward by recognising the real difficulties, doing the underlying work of identifying and making the data linkages, and then creating the service environment. They deserve the success which will follow.

Dear reader, I am aware that I have been a poor correspondent in recent weeks, but in truth I have been doing something I should have done long ago: gaining some experience of AI companies, talking to their potential customers and reading a book. Let’s start at the end and work backwards. 

The book that has eaten the last week of my life is Edward Wilson-Lee’s fine new publication, The Catalogue of Shipwrecked Books, which describes the eventful life of Christopher Columbus’ illegitimate son, Hernando, and his attempts to build a universal library of human knowledge. Hernando collected printed works, including pamphlets and short works, in an age when many scholars still regarded all print as meretricious rubbish. He built a catalogue of his collection, then realised that he could not search it effectively unless he knew what was in the books, so he started compiling summaries – epitomes – and then subject indexing, as well as inventing hieroglyphs to describe each book’s physical properties. In other words, in the 1520s in Seville he built an elaborate metadata environment, but was eventually defeated by the avalanche of new books pouring out of the presses of Venice, Nuremberg and Paris. Wilson-Lee very properly draws many parallels with the early days of the Internet and the Web. 

As I closed this wonderful book, my mind went back to an MIT Media Lab talk given by Marvin Minsky in 1985. We need reminding how long the central ideas of AI have been with us. At the end of his talk, the Father of AI kindly took questions, and a tame librarian in the front row asked: “Professor, if you were looking back from some inconceivably distant date, like, say, 2020, what would surprise you that we have in 2020 but do not have now?” After a thoughtful moment, the great man replied: “Well, I guess that I would praise your wonderful libraries, but still be surprised that none of the books spoke to each other.” At that he left the room, but from then on the idea of books interrogating books, updating each other and creating fresh metadata and then fresh knowledge in the process of interaction has been part of my own Turing test. So I find it easy to say that we do not have much AI in what we call the information industry. We have a meaningless PR AI, a sort of magic dust we sprinkle liberally (AI-enhanced, AI-driven, AI-enabled etc), but few things pass the “books speaking to books and realising things not known before” test.

And yet we can, and we will. The key questions, however, are these: will current knowledge ownership permit this without a struggle, and will there be a dispute over the ownership of the results of these interactions? This battle is already shaping up in academic and commercial research, so it was dispiriting to find, when talking to AI companies, that there really is no business model in place yet enabling co-operation. Partly this is a problem of perception. Owners and publishers see the AI players as technicians adding another tier of value under contract – and then going away again. The AI software developers see themselves as partners, developing an entirely new generation of knowledge engine. And neither will really get anywhere until we all begin to accept the implications of the fact that no one, not even Elsevier, has enough content in one place to make it work at scale. And while one can imagine real AI in broad niches – Life Sciences, say – the same still applies. And if we try it in narrow niches, how do we know that we have fully covered the crossovers into other disciplines which have been so illuminating for researchers in this generation? In our agriscience intelligent system, how much do we include on food packaging, or consumer market research, or plant diseases, or pricing data? 

So what happens next? In the short term it is easy to envisage branded AI – Elsevier AI, Springer Nature AI? I am not sure where this gets us. In the medium term I certainly hope to see some data-sharing efforts to invest in AI partnerships and licence data across the face of the industry. It is true that there are some players – Clarivate Analytics, for example, and in some ways Digital Science – who are neutral to the knowledge production cycle and have hugely valuable metadata collections. They could be a vital building block in joint ventures with AI players, but their coverage is still narrow, and in the course of the last month I even heard a publisher say “I don’t know why we let Clarivate use our data – we don’t get anything for it!”. 

Of course, unless we share our data we are not going to get anywhere. And given the EU Parliament’s rejection of data metering and enhanced copyright protection last week, all these markets are wide open for massive external problem solving – who remembers Google Scholar? The solution is clear – we need a collaborative model for data licensing and joint ownership of AI initiatives. We have to ensure that data software entrepreneurs get a payback and that investment and data licensing show proper returns, just as Hernando rewarded the booksellers who collected his volumes all across Europe. In a networked world, collaboration is often said to be the natural way of working. It is probably the only way that AI can be fully implemented by the scholarly communications world. Hernando died knowing his great scheme had failed. AI will succeed if it shows real benefits to research and those who fund it. And as it succeeds, it will find other ways of sourcing knowledge if those who commercially control access today cannot find a way to lead the charge rather than be dragged along in its wake. 
