Jan 9
Post-Pub and Preprint - The Science Publishing Muddle
New announcements in science publishing are falling faster than snowflakes in Minnesota this week, and it would be a brave individual who claimed to be on top of a trend here. I took strength from Tracy Vence’s review, The Year in Science Publishing (www.the-scientist.com), since it did not mention a single publisher, confirming my feeling that we are all off the pace in the commercial sector. But it did mention the rise, or resurrection, of “pre-print servers” (now an odd expression, since no one has printed anything since Professor Harnad was a small boy, but a way of pointing out that PeerJ’s PrePrints and Cold Spring Harbor’s bioRxiv are becoming quick and favoured ways for life sciences researchers to get their data out there and into the bloodstream of scholarly communication). And Ms Vence clearly sees the launch of NCBI’s PubMed Commons as the event of the year, confirming the trend towards post-publication peer review. Just as I was absorbing that, I noticed that F1000, which still seems to me to be the pacemaker, had just recorded its 150,000th article recommendation (and a very interesting piece it was, about the effect of fish oil on allergic sensitization, but please do not make me digress…)
The important things about the trend to post-publication peer review are all about the data. Both F1000 and PubMed Commons demand the deposit or availability of the experimental data alongside the article, and I suspect that this will be a real factor in determining how these services grow. With reviewers looking at the data as well as the article, comparisons are already being drawn with other researchers’ findings, and the evidential data is throwing up connections that do not appear if the article alone is searched in the data analysis. F1000Prime now has 6000 leading scientists in its Faculty (including two who received Nobel prizes in 2013) and a further 5000 associates, but there must be questions still about the scalability of the model. And about its openness. One of the reasons why F1000 is the poster child of post-publication peer review is that everything is open (or, as they say in these parts, Open). PubMed Commons, on the other hand, has followed the lead of PubPeer and demanded strict anonymity for reviewers. While this follows the lead of the traditional publishing model, it does not allow the great benefit of F1000: if you know who you respect and whose research matters to you, then you also want to know what they think is important in terms of new contributions. The PubPeer folk are quoted in The Scientist as saying in justification that “A negative reaction to criticism by somebody reviewing your paper, grant or job application can spell the end of your career.” But didn’t that happen anyway, despite blind, double blind, triple blind and even SI (Slightly Intoxicated) peer reviewing?
And surely we now know so much about who reads what, who cites what and who quotes what that this anonymity seems out of place, part of the old lost world of journal brands and Open Access. The major commercial players, judging by their announcements as we were all still digesting turkey, see where the game is going and want to keep alongside it, though they will farm the cash cows until they are dry. Take Wiley (www.wiley.com/WileyCDA/pressrelease), for example, whose fascinating joint venture with Knode was announced yesterday. This sees the creation of a Knode-powered analytics platform provided as a service for learned societies and industrial research, allowing Wiley to deploy “20 million documents and millions of expert profiles” to provide society executives and institutional research managers with “aggregated views of research expertise and beyond”. Anyone want to be anonymous here? Probably not, since this is a way of recognizing expertise for projects, research grants and jobs!
And, of course, Elsevier can use Mendeley as a guide to what is being read and by whom. Their press release (7 January) points to the regeneration of the SciVal services, “providing dynamic real-time analytics and insights into the… (Guess What?)… Global Research Landscape”. The objective here is one dear to governments in the developed world for years – to help research management benchmark themselves and their departments, such that they know how they rank and where it will be most fruitful to specialize. So we seem, quite predictably, to be entering an age where time to read is coming under pressure from the volume of available research articles and evidential data, so it is vital to know, and know quickly, what is important, who rates it, and where to put the most valuable departmental resources – time and attention span. And Elsevier really do have the data and the experience to do this job. Their Scopus database of indexed abstracts, all purpose-written to the same taxonomic standard, now covers some 21,000 journals from over 5000 publishers. No one else has this scale.
The road to scientific communication as an open and not a disguised form of reputation management will have some potholes, of course. CERN found one, well reported in Nature’s News on 7 January (www.nature.com/news) under the headline “Particle Physics papers set free”. CERN’s plan to use its SCOAP3 project to save participating libraries money, which was then to be disbursed to push journals to go Open Access, met resistance – but from the APS, rather than the for-profit sector. Meanwhile the Guardian published a long article (http://www.theguardian.com/science/occams-corner/2014/jan/06/radical-changes-science-publishing-randy-schekman) arguing against the views of Nobel laureate Dr Randy Schekman, the proponent of boycotts and bans for leading journals and supporters of impact factor measurement. Perhaps he had a bad reputation management experience on the way to the top? The author, Steve Caplan, comes out in favour of those traditional things (big brands and impact factors), but describes their practices in a way which would encourage an uninformed reader to support a ban! More valuably, the Library Journal (www.libraryjournal.com/2014/01) reports this month on an AAP study of the half-life of articles. Since this was done by Phil Davis it is worth some serious attention, and the question is becoming vital – how long does it take for an article to reach half of the audience who will download it in its lifetime? Predictably the early results are all over the map: health sciences are quick (6-12 months), but maths and physics, as well as the humanities, have long-duration half-lives. So this is another log on the fire of the argument between publishers and funders over the length of Green OA embargoes. This problem would not exist, of course, in a world that moved to self-publishing and post-publication peer review!
POSTSCRIPT For the data trolls who pass this way: The Elsevier SciVal work mentioned here is powered by HPCC (High-Performance Computing Cluster), now an Open Source Big Data analytics engine, but created by and for LexisNexis Risk to manage their massive data analytics tasks as ChoicePoint was absorbed and they set about creating the risk assessment system that now predominates in US domestic insurance markets. It is rare indeed among major information players to see technology and expertise developed in one area used in another, though of course we all think it should be easy.
Dec 17
Access, Evaluation, Science – all Open?
“When the Spin slips, change the name!” as British Spin Meister Alastair Campbell almost said, but didn’t until I put the words into his ever-open mouth. When I look back over the past 15 years of science publishing, I see more spin and less change than I would ever have believed possible. Yet when I try to look forward 10 years, I see a wave of fundamental change more threatening than the games we have been playing in these Open spaces. For me, a good proof of the failure of the almost political campaigning around Open Access to carry the day beyond some 12-15% of users (check the latest Outsell market report, Professor Harnad) is the name-switch game – with PLoS now talking “Open Evaluation” and Academia.edu being used by 5 million scientists who believe in Open Science. The fundamental change is about self-publishing and post-publication peer review: this will upset the applecart both of commercial publishing, if it does not adjust in time, and of the ersatz Fundamentalists of the Open Access movement of a decade ago, who wanted to preserve peer review as much as they wanted to destroy commercial ownership and restriction.
Since we are talking Science, let’s try an experiment. Take any other broken, misused, meaningless and hackneyed term and place it in the context where “Open” now sits in terms of Science and Access. For example, take “Socialist”. Or “Community”. Or even “Public”. See what I mean? All meaningless, or, like those eye tests, you see the same through each lens that the optometrist puts into the frame before your eye, and end up lying about the difference between this one and that – because there is no discernible difference but you do not want to disappoint. Real change is not to be described by this means. It concerns the wish of young scientists to be noticed in the network as soon as possible on completion of their work – and before that, where conferences, posters, blogs and other mentions begin to build anticipation. Real scholarly communication is now available in several different flavours, from Mendeley to Academia.edu. Since I have been solemnly assured for 30 years by senior scientists and publishers alike that scientists will not share, I have to be amazed by the size of these activities. These newcomers are no less worried about attaining research grants or tenure than their predecessors, but they live in a networked scientific world where, if you are not quickly present in the network, you are not referenced in debate – and being part of the argument is becoming as critical to getting grants and tenure as a solid succession of unread papers published two years after the research ended used to be.
These convictions are much strengthened by this week’s announcements. The announcement from F1000Research (December 12) that their articles are now visible in PubMed and PubMed Central gives a complete clue to what this is all about. Users want to publish in five days, but they want to be visible everywhere a researcher/peer would expect to look. And increasingly they will expect that the article will collect into post-publication peer review all those earlier references in conference proceedings, blogs and elsewhere. So while people like F1000Research will handle “formal” post-publication peer review, informal debate and commentary will not be lost. And the metrics of usage and impact will not be lost either, as we look so much more widely than traditional article impact to discern what influence this author/team/findings/ideas have had. “Open Evaluation” from PLoS aims just there, as it recently launched its second evaluation phase from PLoS Labs (http://www.ploslabs.org/openevaluation/). This post-publication article rating system reminds me very precisely that PLoS One was not in any sense a traditional peer review process. It was a simple methodological check for scientific adequacy (“well-performed” science), and while the volume of processing solved a multitude of financial issues, the fuller rating of these articles still rests with the user. We shall see PLoS One as the turning point to self-publishing when the history is written.
And so we move towards a world where original publication of science articles is no longer the prerogative of the journal publisher. While review systems will flourish and abstracting and indexing will remain vital, that tangled mass of second- and third-tier journals, the most profitable end of traditional STM, will slowly begin to disperse. Some databases will adopt journal brands, of course, and the great brands will survive as ratings systems themselves. “Selected by Cell as one of the 50 most influential research articles of the year”, or “Endorsed by Nature as a key contribution to science” will be enviable re-publishing, increasingly with datalinks, improved access to image and video, and other advantages. This is where semantic enrichment and data analysis will first become important – before it becomes the norm. But these selections will be made from what is published, not what is submitted for publication. And a clue to what the future offers was indicated by a Knovel (Elsevier) announcement this week. Six publishers with either small, high-quality holdings in engineering research, or activities in engineering that can use the Knovel platform, entered into collaboration agreements to make their content available via the Knovel portal. Amongst these were Taylor and Francis (CRC Press), as well as specialists like ICE (Institution of Civil Engineers) and the American Geosciences Institute. As Knovel is in a directly competitive position with IHS GlobalSpec, it is relevant to ask how many engineering research portals that marketspace will need. It now has two – and I seriously doubt that there will ever be more than two aimed at both research and process workflow, though their identities may change (see Thomson Reuters/Bloomberg/Lexis in law). Increasingly, then, small science publishing will be re-intermediated – and we do not need a business degree to imagine what that will do to their margins, as well as their direct contact with their users.
“Open”, whatever else it means, connotes “contraction” for some people.