At one point in his Rural Rides of the 1820s, reporting on England in the fearful agricultural depression that followed the end of the Napoleonic Wars, William Cobbett found himself crossing the flat plain of Heathrow outside London. Seeing how the new fashion of grazing for meat production had taken hold, with the heath enclosed into fields and the people turned off the land in favour of the animals, he commented that Heathrow was “a land where sheep do eat men”.

Nearly two centuries later, sitting in the airport on the same site, I recall setting off for my last pre-pandemic in-person Frankfurt Book Fair – my 51st – and determining that I would ask every scholarly communications publisher I met what the balance was between readership by machines and readership by human beings. I was disappointed, though not surprised, by the results. Only a few had any idea, and even those could not distinguish between access by bots for indexation and reference and access for machine learning and other intelligent purposes. I thought then that if publishers really were as user-centric as they claimed, they would pay far more attention to adding and improving the metadata and entity extraction that make articles, reviews and books much more machine-digestible. Given the volume of research findings, commentary, blogs, posters, presentations and other material in every research sector, the chance of anyone being able to assess research without an intelligent machine interface was becoming fairly remote.

As it turns out (though not unusually), I had the question upside down. I should have been asking how much published material was produced by machines using machine intelligence, and then subsumed into articles notionally written by groups of people. Dull and boring, though vital, work has been going this way for years. References, literature listings, citations – all well-formatted elements can be far more easily assembled by machine intelligence, especially in the age of the electronic lab notebook. Research based on or validated by text mining or advanced data interrogation techniques becomes an interplay between machine intelligence and researcher intelligence. In an age when artificial intelligence (as in the UNSILO peer review pre-checks used by publishers like Springer Nature) can take the effort and time out of peer review and purify it into an act of judgement, no one should be surprised when machines effectively write research articles. After all, they do all the grunt work already.

For some in scholarly communications it remains an axiom that the sector is the bellwether of technological change. Perhaps that is less true in this case. “Data storytelling” and “data narrative” have been live concepts ever since a group of computer scientists from Northwestern created a company called Narrative Science a decade ago. Their work with early users, like that of others using Automated Insights at places such as the Associated Press, showed that standard reporting of easily formatted and predictable material – college baseball and football results, for example – made automated journalism quickly acceptable. The academic researchers who have looked at the issue (at Karlstad Universitet, Sweden*) report that most of us cannot tell the difference. Machine-written financial services reporting, from early experiments at companies like Hanley Wood five years ago, now extends to the recent announcement by Meltwater that it is using machine intelligence to write investment analysis. And anyone who doubts the ability of intelligent machines to summarise reporting more effectively should look at the analysis on the Primer.ai blog.

So will we see research articles substantially or even wholly written by smart machines? Certainly, and soon, if it is not happening already. It will help speed up reporting while freeing researchers to research – this is not a replacement issue, unlike at MSN, where some news reporters were replaced in September 2020 by automated news gatherers. But I am left with the thought that all changes to the automation of these workflows have knock-on effects. Will the ability of the computer to create the research report connect to all of the other intelligent machines waiting to analyse that result and lodge its impact and status in their systems? In other words, will scholarly communication become a machine-to-machine environment, with the only thing reading everything in detail being… a machine? And if that should become the case, quis custodiet ipsos custodes?

* Nord, L., Karlsson, M., & Clerwall, C. (2017b). The public doesn’t miss the public: Views from the people: Why news by the people? Journalism – Theory, Practice & Criticism. Epub ahead of print. https://doi.org/10.1177/1464884917694399

Nord, L., Karlsson, M., & Clerwall, C. (2017a). Taking stock of transparency tools in journalism: Lessons learned from Swedish citizens. Presented at the Future of Journalism, Cardiff. Retrieved from http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-31541
