The General Index poses a publisher question

Science advances by standing on the shoulders of giants, but sometimes you need a stepladder. Longtime public-access activist Carl Malamud believes he is providing one in his newly launched (7 October) General Index, a way of filleting scientific knowledge and spitting out the essential bones, which may yet rival Sci-Hub, the Kazakhstan-founded pirate site of full-text science articles, as the no-cost way to search scientific literature without paying publishers for the privilege. In a world of pinched science budgets this may be appealing. Even more appealing may be the thought of getting to the essence without full-text searching, eliminating false leads and extraneous content.

It used to be a joke that one day the metadata around science research articles would be so good that you could pursue most searches through the metadata without troubling yourself with the text of the article. Indeed, in some fields, like legal information, the full text of cases could be a nuisance, and concordances, citation indexes and other analytical tools could be used to get quickly to the nub of the question. Today these are built into the search mechanism and the search strategy. Mr Malamud has a long history in public and legal information (see Public.Resource.Org, his not-for-profit foundation and publishing platform). At one point he challenged Federal law reporting on cost and campaigned to become Public Printer of the United States. But he is also a very serious computer scientist, and his target now is the siloed, paywalled world of non-Open Access science publishing. The point of attack is both shrewd and powerful.

The weakness of the publishers is that their paywalled information cannot be searched externally in aggregate in a single, comprehensive sweep. Just like Sci-Hub, Mr Malamud enables “global” searching to take place. He has built an index. Currently he covers 107 million articles in all of the major paywalled journals. He has indexed n-grams: single words and word groups of two, three, four and five. He has built metadata, IDs and references to the journals. And, he claims, he has done this without breaching anyone’s copyright. He points out that facts and ideas are not copyrightable, and that his index entries are too short to attract copyright protection in their own right. Publishers will no doubt try to test this legally, probably in the US or UK, since common-law jurisdictions look more favourably on economic rights. In the meanwhile it is worth pondering part of his publication statement:

“The General Index is non-consumptive, in that the underlying articles are not released, and it is transformative in that the release consists of the extraction of facts that are derived from that underlying corpus. The General Index is available for free download with no restrictions on use. This is an initial release, and the hope is to improve the quality of text extraction, broaden the scope of the underlying corpus, provide more sophisticated metrics associated with terms, and other enhancements.”

It is very clear from this that science publishing, if it attacks the General Index, is going to do so on very tricky ground. Looking like monopolists is nothing new, but the argument that publishers are instrumental in building researchers’ reputations and career advancement weakens when the publisher is being pilloried for restricting access to knowledge. Building a new business in data solutions and analytics is a road that several have taken, but only the largest are very far advanced. Might this be a time for the very largest to get together to discuss grouping services for researchers, free of charge and without anti-trust implications? Old-style subscription journal publishing is getting boxed into a corner, with Open Platform publishing advancing quickly now, with applications like Aperture Neuro and work like the Octopus research project at UKRI that I have mentioned previously.
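For readers curious about the mechanics, the n-gram indexing described earlier can be sketched in a few lines of Python. This is a minimal illustration only, assuming naive whitespace tokenization; the General Index’s actual extraction pipeline is certainly more sophisticated:

```python
from collections import Counter

def ngrams(text, max_n=5):
    # Naive lowercase/whitespace tokenization -- an assumption
    # for illustration, not the General Index's real pipeline.
    words = text.lower().split()
    grams = Counter()
    # Collect every word sequence of length 1 through max_n,
    # mirroring the unigram-to-5-gram range described above.
    for n in range(1, max_n + 1):
        for i in range(len(words) - n + 1):
            grams[" ".join(words[i:i + n])] += 1
    return grams

# Each entry maps an n-gram to its frequency in the passage.
index = ngrams("facts and ideas are not subject to copyright")
```

Built over 107 million articles, a table of such entries becomes searchable in aggregate without ever redistributing the articles themselves, which is the nub of the “non-consumptive” claim.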

In all of this, data, the vital evidential output from research and experimentation, remains neglected. Finding a business model for making data available, marked up, richly enhanced with metadata and fully machine-to-machine interoperable remains a key challenge for everyone in scholarly communications. Even when Mr Malamud’s 5 terabytes of data (compressed from 38) are installed, the index will only be a front-end steering device to guide researchers more quickly to their core concerns, and those will eventually lead to the underlying data rather than the article.

The references below include a Nature article with the only comment from a major publisher that I have seen so far. I wonder if they are talking about it in Frankfurt!


