Nov 1
Members of the House of Peers
Filed Under Blog, healthcare, Industry Analysis, internet, Publishing, Reed Elsevier, Search, semantic web, STM, Uncategorized, Workflow
Not another note on Open Access, surely? Well, I am sitting here on 31 October reading an article published on 1 November (how up to date can a blogger be?) in the Educause Review Online (www.educause.edu/ero/article/peerj-open-access-experiment), and I really want to convey my respect for people like Peter Binfield, who wrote it, for their huge energy and ingenuity in trying to make Open Access work. Peter’s note, “PeerJ: An Open-Access Experiment”, describes the efforts that he and his PeerJ colleagues have put into creating fresh business models around Open Access, which was born without one and has always seemed to its adherents to need to be cloaked in one. Open Access has proved a far from lusty infant in many ways, but those who continue to adhere to the cause seem to feel, in their admirable and unfailing optimism, that some small tweak will suddenly create economic salvation and thus a take-off into sustainable business growth.
In the case of PeerJ, the take-off vehicle is going to be a membership model. Peter Binfield co-founded the outfit in June 2012 with Jason Hoyt, former Chief Scientist at Mendeley, but the model that they feel will work owes nothing to smart algorithms. Instead, they simply launch themselves at the Article Processing Charge (APC), the way in which Gold OA has been sustained so far, and replace it with – a subscription. Now this is admittedly a personal subscription, levied on all article contributors (that is where the volume lies – in multi-authoring), and subscribers – or members, as they would wish to describe them – can then continue to publish without further charges as long as they keep up their membership fees. Of course, if they join with new teams who have not previously been members, then I presume we go back to zero until those contributors are also members with a publishing history. Each contributor who pays a membership fee of $299 can publish as often as they like: a nominal $99 contribution allows one submission a year.
PeerJ have assembled a panel of 700 “world class academics” for peer review purposes and intend to open for submissions by the end of the year. In a really interesting variation on the norm, they have put a preprint server alongside the service, so submissions will be visible as soon as they are received. It is not clear how much editorial treatment is involved in these processes, or indeed what “publishing” now means in this context, or when a submission appears on the preprint server. But one thing is very clear: this is not going to be peer review as it once was, but simply the technical testing pioneered by PLoS ONE. Once it is established that the article conforms to current experimental good practice, then it gets “published”.
It is around this point in ventures of this type that I want to shout “Hold on a moment – do we really know what we are doing here?” I am sure that I will be corrected, but what I can currently see is a huge dilution of the concepts of “journals” and “publishing”. PeerJ starts with no brand impact. It is not conferring status by its selectivity, like Nature or Cell, or even carrying some brand resonance, like PLoS. And its 700 experts, including Nobel Laureates, are being asked whether the enquiry methodology was sound, not whether the result was good science or impacted the knowledge base of the discipline. PeerJ should be commended for allowing reviews by named reviewers to be presented alongside the article, but, fundamentally, this seems to me like another ratcheting downwards of the value of the review process.
Soon we shall hit bottom. At that point there will be available a toolset which searches all relevant articles against the submitted article, and awards points for fidelity to good practice or for permissible advances on established procedures. Articles where authors feel they have been misjudged can be re-submitted with amended input. The device will be adopted by those funding research, and once the device has issued a certificate of compliance, the article, wherever it is stored, will be deemed to have been “published”. There will be no fees and no memberships. Everything will be available to everyone. And this will introduce the Second Great Age of Publishing Journals, as the major branded journals exercise real peer review and apply real editorial services.
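The compliance-checking device imagined here does not exist, but its core mechanism – scoring a submission for similarity to a corpus of accepted methodology – can be sketched in a few lines. Everything below (the corpus, the threshold, the notion of a “certificate”) is invented purely for illustration, using a toy bag-of-words cosine similarity:

```python
from collections import Counter
from math import sqrt

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between two texts as bags of lower-cased words."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = sqrt(sum(c * c for c in va.values())) * sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

def compliance_score(submission: str, accepted_corpus: list[str]) -> float:
    """Score a submission by its best match against accepted methodology texts."""
    return max(cosine_similarity(submission, doc) for doc in accepted_corpus)

def certify(submission: str, accepted_corpus: list[str], threshold: float = 0.5) -> bool:
    """Issue the hypothetical 'certificate of compliance' above a threshold."""
    return compliance_score(submission, accepted_corpus) >= threshold
```

A real funder-grade tool would of course need semantic understanding rather than word overlap, but the shape of the pipeline – corpus, score, threshold, certificate – would be the same.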
But something has changed now. The Editors of the Lancet or Nature or Cell have decided, in my projection, not to entertain submissions any longer. Instead they will select the articles that seem to them and their reviewers most likely to have real impact. These they will mark up to a high level of discoverability, using entity extraction and advanced metadata to make them effectively searchable at every level and section and expression within the article. Authors will have a choice when they are selected – they can either pay for the services up front or surrender their ownership of the enhanced version of the article. Since the article will be available and technically assessed already, spending more on it will seem fruitless. So we shall return to a (much smaller but equally profitable) commercial journals marketplace. Based once again on selectivity and real, expensive peer review.
Experienced readers will have already spotted the flaw. With wonderful technologies around like Utopia Documents and other new article development activities (Elsevier’s Article of the Future) surely the new age of the article can only exist until these technologies are generalized to every institutional and research programme repository. That is true – but it will take years, and by that time the publishers will be adding even higher value features to allow the researcher’s ELN (Electronic Lab Notebook) full visibility of the current state of knowledge on a topic. Beyond that, we shall consider articles themselves too slow, and inadequate for purpose, but that is a discussion for another day.
Oct 9
Horizontally Searching the Vertical
Filed Under Big Data, Blog, eLearning, Industry Analysis, internet, mobile content, Publishing, Reed Elsevier, Search, semantic web, STM, Thomson, Uncategorized, Workflow
No, I did not go to STM in Frankfurt. Or ToC, for that matter. I spent the day in bed instead, trying to clear an infection and raging at hotels and their procedures. Big American chain hotels. Run by Germans. “We cannot check you in without you paying for all of the room occupancy in advance.” “But I am a long-standing member of your loyalty scheme…” “It doesn’t matter – the new rule is cash in advance before you get a key…” “Now my key has stopped working…” “You can get a new one if you show me photo ID…” “But it is locked in the room…” “No excuse, all Germans are expected to carry photo ID at all times…” “Mr Manager, I want to be catered for as an individual with a good credit record in your hotels…” “Sir, I am simply following our rules and policies.” The sooner this man is replaced by some workflow software the better, in my view – since he cannot vary any procedure, this would be a logical step. In the meanwhile I vent my spleen on TripAdvisor and ponder the wonder of Elsevier and the Article of the Future.
Yesterday Elsevier unveiled a new web HTML version of the articles held in ScienceDirect. And having once played a tiny role as a judge in the unfolding Article of the Future story, I have sympathy for what they are about, and a conviction that they are as close to the money as anyone in taking scholarly communication in science to the next step. Here is what they say their new article format will achieve:
- Downloadable tables and PowerPoint presentations that encourage re-use and citation
- The ability to view references with their abstracts, without losing place in the current article
- Additional context from related and citing articles as well as from external databases
- Discipline-specific content, interactive visualization and workflow applications
Now please read back through that again carefully. The first is a no-brainer. Good market research companies like Euromonitor have been facilitating re-use in this way for some time, and I saw a great application in a different discipline only last week. We should be asking why only now, and why Elsevier are the first. The second is more specific, and removes a long-standing annoyance. But it is the third and fourth items which really get me going. Searching horizontally across articles in several different disciplines and moving seamlessly between references and citations is becoming key to the dream of a rational discovery system for science. Science will have its Big Data solutions alright (though very few current publishers will be participants, it seems).

But the survival hope for the so-called “journal publishers” is surely that the article will come to be seen as a viewer – or a Vorspeise, an appetiser, as we would say in this hotel room. In this vision the article becomes the way of entering the enquiry at one point and then moving horizontally, using references, citations and contextual information, across sub-disciplines and into fields of conjectural interest. Yes, we shall have more effectiveness from semantic search, and, yes, our advanced taxonomies and ontological structures will do much of the back-breaking stuff. But where the quirky human mind of the researcher is the search engine, we shall want the article to be linked to the relevant evidential data, to the blogs and the posters and the proceedings and the PowerPoints and the minutiae of scholarly research, just as we shall want the invaluable navigational aid of A&I to stop us from wasting time with what does not merit attention, and review articles to help us see what others have seen before us.
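That horizontal movement – entering at one article and fanning out through its references and the articles that cite it – is, in data terms, a traversal of a citation graph. A minimal sketch, with invented article identifiers, just to make the mechanics concrete:

```python
from collections import defaultdict

class CitationGraph:
    """Toy citation graph: an edge runs from a citing article to a cited one."""

    def __init__(self):
        self.references = defaultdict(set)   # article -> articles it cites
        self.cited_by = defaultdict(set)     # article -> articles citing it

    def add_citation(self, citing: str, cited: str) -> None:
        self.references[citing].add(cited)
        self.cited_by[cited].add(citing)

    def neighbourhood(self, article: str, hops: int = 1) -> set[str]:
        """Articles reachable within `hops` steps, following references and
        citations in either direction -- the 'horizontal' move in the text."""
        seen, frontier = {article}, {article}
        for _ in range(hops):
            frontier = {n for a in frontier
                        for n in self.references[a] | self.cited_by[a]} - seen
            seen |= frontier
        return seen - {article}
```

The real-world equivalent would draw its edges from reference lists and citation indexes rather than hand-entered pairs, but the user experience Elsevier describe – hopping from an article to its context without losing your place – rests on exactly this kind of bidirectional link structure.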
So we need the Article of the Future. And as the article adds more diverse content by way of embedded video, more images, manipulable graphs, links to evidential data or attached programming, we need it urgently. But it is an emerging standard, so how many do we need? Well, only one. As Elsevier remark, this is and always will be work in progress. But it is work in everyone’s progress. If I have subscriptions with Springer, Sage, Wiley-Blackwell and Elsevier, to name but a few, will I not want all articles to be manipulable and cross-searchable in every way, not just cross-referenced in Google? So is this not a point where industry collaboration in the face of overwhelming user dissonance might for once pay dividends?
I have just reread that sentence with a sinking feeling. This industry seldom collaborates. What could happen as a result, if Elsevier are strategically adroit and make their article format open, is that users adopt it for their own repositories, loaded onto vehicles like Digital Science’s figshare. Then users can launch a search from ScienceDirect and get everything. Of course, as long as “publishing” equals “tenure and research rating”, people will still seek journal publication, but those links are fragile and may collapse under their own weight. Research through searching threatens to become another matter, divorced from the publication cycle. If that happens there will be few survivors in current publishing, but Elsevier at least should be one. Everyone else needs to think hard about collaboration, or plan for business diversification for all they are worth.
My quote of the week? “The solutions will come when science goes mobile in research terms – then someone will have to step in to configure the device and all these publishers will simply be rewarded with royalties for their content contribution to a solution they did not make and cannot control.”
Sounds a bit like Apple meets STM!