Nov 23
Here be Giants
Filed Under B2B, Blog, Financial services, Industry Analysis, internet, mobile content, news media, Publishing, Reed Elsevier, Thomson, Uncategorized, Workflow
The emerging world of network publishing, if found on the map of a medieval cartographer, might well be seen as a land of giants. A land far larger than that occupied by the noisy contestants of consumer publishing, or the sad and plaintive territory once dedicated to Music, it is built on foundations of business and professional information. No one writes about it much in the press or public media (they have survival problems of their own), but if what is currently taking place on the networks does not produce the expected benefits for global businesses over the next five years, then the ability of the world economy to grow out of recession and still keep control of commercial practices will be inhibited and delayed, with adverse effects on all of us. The promised B2B revolution has to deliver real benefits in improved productivity (more done by fewer people with improved outcomes), better decision-making (greater security around getting all the factors lined up, and weighting them effectively) and more cost-effective compliance (greater assurance that best practice and regulatory frameworks can be implemented in practice and audited). This is a tough call, but for those who can do it there will be rich rewards.
And in this land there are Giants, and their consolidation is taking them to new places, far away from the craft practices that we might designate as “publishing”. The largest players are consumed by the idea of workflow, and not at any trivial level of integration either. In the last six months I have been privileged to walk the territory, map in hand, sometimes only vaguely recognizing a terrain that I first explored 25 years ago, and many times since, and sometimes getting re-acquainted with old features in entirely new contexts. My conclusion is that Thomson Reuters and Reed Elsevier are now at a transformational stage, that there is blue water between them and the rest of the marketplace, and that if they are able to complete the transformation at significant scale they will tap into an area of margin and revenue growth that exceeds expectations in the information services sector at present. Meanwhile their former sector competitors are still stumbling around trying to redress the past by taking content online, re-inventing advertising models and awaiting the rebirth of format publishing in the networks. It will not happen, and the wisest and best know it.
So what is happening at TR and RE which is so laudable? I have spoken of Thomson Reuters (Rebuilding Inside Out) as re-inventing the core of this huge company through the creation of workflow products and services that start by concentrating the core assets of clients and content across the legal-regulatory-financial services continuum, and then creating workflow around governance, risk and compliance that first standardizes practices (and thus itself becomes a performance standard) and then becomes a template for applications that move out into every business marketplace. The starting points are viral, like tax and regulation. The transformation comes when these solutions, using a mix of TR content, client content and licensed third party content, become a standard enterprise software application that can be bolted into the network (internet/extranet/intranet/mobilenet) or run as SaaS in the Cloud.
This is policy-driven workflow, ensuring compliance and allowing systems to drive governance. But you can take a completely different view of this if your aspiration is to drive business functionality itself. Imagine you are in insurance markets, where RE’s LexisNexis dominates. Workflow is created around risk assessment: know your customer, verify his record, score and categorize him, reject or insure him, check his subsequent performance. The combination of Seisint and ChoicePoint at Lexis, plus all the access to public record content, and then to media aggregation like Nexis, creates a bedrock solution (and one so solid that in anecdote it is possible to speak of a new market entrant into the US insurance market basing his market entry plan on this Lexis Risk platform). Where an information-fuelled solution creates the workflow format (and to do this strategic alliance can be important – Experian, for example, is a valued partner to Lexis in the US) this model can be replicated by Lexis in countless markets.
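The risk-assessment sequence above (know your customer, verify, score, accept or reject) can be sketched as a toy pipeline. Everything here is invented for illustration – the records, thresholds and scoring rule bear no relation to any real Lexis Risk product.

```python
# A toy sketch of the risk-assessment workflow: verify the applicant,
# score and categorize them, then insure or reject.
# All records and thresholds below are purely illustrative.

RECORDS = {
    "A. Smith": {"claims": 0, "verified": True},
    "B. Jones": {"claims": 4, "verified": True},
}

def assess(name):
    """Return a decision for an insurance applicant."""
    record = RECORDS.get(name)
    if record is None or not record["verified"]:    # know your customer
        return "refer"
    score = 100 - 20 * record["claims"]             # score and categorize
    return "insure" if score >= 60 else "reject"    # reject or insure him

print(assess("A. Smith"))
print(assess("B. Jones"))
```

The point of the anecdote is that once such a pipeline is fed by authoritative data, the decision logic itself becomes the product.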
Indeed, it can be created in law markets themselves. Lexis and TR’s Westlaw have long since moved on from information as pure research in legal services. Support for law practice marketing was an early target of both. In the research days of the past, litigators were the great purchasers of access, so it is noteworthy to see Lexis moving on to litigation support systems – its CaseMap workflow model is now used by 99 of the AmLaw 100, the top litigators in the world. Even more noteworthy is the equal emphasis given to the business of law alongside the practice of law: when current plans are fully implemented in the next few months, every sector/customer segment will have business workflow integration at each level of business activity (from billing and time management to marketing) as well as each sector of practice activity. So is Lexis a law publisher when it reaches this point? Or a fully integrated systems supplier with comprehensive solutions supporting all aspects of being a lawyer? (Or, if our examples came from the science sides of these businesses, of being a researcher or a medical care provider.)
One driver of these changes has been competition. Would these Giants be so impressive without each other? And yet these two Giants are showing that very different approaches are possible even in a very competitive field. As they grow there will be other competitors: are these two the allies or the competitors of IBM, or Oracle, or SAP? Dramatically, Lexis has integrated all of its workflow for lawyers with Microsoft Outlook and Word, to the huge benefit of workflow integration for lawyers (and a great gain for Microsoft, as law practices are forced to upgrade). And this in turn demonstrates the return to the underlying network, after the Flight to the Web in the late 1990s. Workflow by definition is not a flashy web-based offering, but a series of real internet-based applications installed within the firewall. It is this sort of consideration, as well as the Apps marketplace in consumer mobile, that makes Chris Anderson ponder the future of the Web.
Mobile and the Devices will get built into all of this (witness TR’s BoardLink for the secure retention of board papers and directors’ reports in a workflow tool for company secretaries and directors). I have a feeling that we are still closer to the beginning than the maturity of whatever this is, as well as a sense of wonderment that so many of these changes are happening around that most conservative of professions, the Law. These two players now stand well apart from the chasing pack, and have both done what seemed so unlikely only a few years ago: created a cadre of experienced managers who know digital content and business models backwards, who truly know their own customers, and who have the technical support to make good decisions. It is here, not with Jobs and Murdoch trying to write a new future for the exhausted newspaper format via the Daily, that the real future of “network (electronic) publishing” lies.
Jul 8
Gribbling in the Dark
Filed Under Blog, Education, Industry Analysis, internet, Publishing, semantic web, STM, Thomson
So there was a word for it after all. Some kindly soul at a conference last week, seeing that I was unable to describe the strange digital burbling that took place when you dialled up a database in 1979 and inserted the telephone handset into the acoustic coupler, shouted out the correct expression – the noise was “gribbling” and I was delighted to be reunited with a term which should never have been lost. And it allows me to remark, if I have not lost you already, that it is a mature industry whose terms of art, invented for a purpose, have now fallen into disuse because the processes they describe have become redundant. I expect to have to explain to my children how my typographer’s ruler works, or what slug setting, or galleys, or heavy leading or hot metal meant. The fact that the first generation digital expressions are already themselves redundant (who last saw an acoustic coupler?) tells an important story.
And that story is particularly relevant to the fascinating conference that I was attending. Last week’s seminar on “Ready for Web 3.0?” organized by ALPSP and chaired by Louise Tutton of Publishing Technologies was just what the doctor ordered in terms of curing us of the idea that we still have time to consider whether we embrace the semantic web or not. It is here, and in scholarly publishing terms it is becoming the default embedded value, the new plateau onto which we must all struggle in order to catch our breath while building the next level of value-add which forms the expectation of users coming to grips with a networked information society today. And from the scholarly world it will spread everywhere. I will put my own slides from the introductory scene-setting on this site, but if you can find any of the meaty exemplar presentations from ALPSP (it is worth joining them if they are going to do more sessions of this quality) or elsewhere then please review them carefully. They are worth it.
Particularly noteworthy was a talk by Professor Terri Attwood and Dr Steve Pettifer from the University of Manchester (how good to see a biochemistry informatician and a computer scientist sharing the same platform!). They spoke about Utopia Documents, a next generation document reader developed for the Biochemical Journal which identifies features in PDFs and semantically annotates them, seamlessly connecting documents to online data. All of a sudden we are emerging onto the semantic web stage with very practical and pragmatic demonstrations of the virtues of Linked Data. The message was very clear: go home and mark up everything you have, for no one now knows what content will need to link to what in a web of increasing linkage universality and complexity. At the very least everyone who considers themselves a publisher, and especially a science publisher, should read the review article by Attwood, Pettifer and their colleagues in Biochemical Journal (Calling International Rescue: Knowledge Lost in the Literature and information Landslide, http://www.biochemj.org/bj/424/0317/bj4240317.htm). Incidentally, they cite Amos Bairoch and his reflections on Annotation in Nature Precedings (http://precedings.nature.com/documents/3092/version/1) and this is hugely useful if you can generalize from the problems of biocuration to the chaos that each of us faces in our own domains.
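What “mark up everything you have” means in practice can be sketched very simply: recognize known entities in a text and wrap them in machine-readable attributes (RDFa-style) that point at identifiers elsewhere on the web. The entity dictionary and URIs below are invented for illustration, not drawn from Utopia Documents or any real vocabulary.

```python
# A minimal sketch of semantic mark-up: wrap each known entity name in an
# RDFa-style <span> so the document carries machine-readable links.
# The mappings below are hypothetical, for illustration only.

ENTITIES = {
    "haemoglobin": "http://purl.uniprot.org/uniprot/P69905",
    "Biochemical Journal": "http://www.biochemj.org/",
}

def annotate(text):
    """Return text with each known entity wrapped in an RDFa span."""
    for name, uri in ENTITIES.items():
        span = '<span property="rdfs:seeAlso" resource="%s">%s</span>' % (uri, name)
        text = text.replace(name, span)
    return text

print(annotate("The paper on haemoglobin appeared in Biochemical Journal."))
```

A real pipeline would use proper entity recognition and curated ontologies rather than string matching, but the output is the same in kind: prose that a linking web can consume.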
Two other aspects were intriguing. Utopia Documents had some funding from the European Commission, EPSRC, BBSRC, the University of Manchester and, above all, the BJ’s publisher, Portland Press. One expects the public bodies to do what they should be doing with the taxpayers’ cash: one respects a small publisher putting its money where its value is. And in another session, on the semantic web collaboration between the European Respiratory Society and the American Thoracic Society, called felicitously “Breathing Space”, we heard that the collaborators created some 30% of the citations in respiratory medicine, and that their work had the effect of “helping their authors towards greater visibility”. Since that is why the industry exists, it would seem that the semantic promise underpins the original publication promise. Publishers should be creating altars for the veneration of St Tim Berners-Lee and dedicating devotions to the works of Shadbolt and Hall, scholars of Southampton.
Sadly they are not, but coming out of this day of intense knowledge sharing one could not doubt that semantic web, aka Linked Data, had arrived and taken up residence these several years in scientific academe. Now if it will only bite government information and B2B then we shall be on our way. And, as Leigh Dodds of Talis reminded us, we shall have to learn a new language on that way. Alongside new friends like ontologies and entity recognition and RDF, add RDFa, SKOS (simple knowledge organizing systems to you!), XCRI education mark-up, OpenCalais (go to Thomson Reuters for more), triples, Facebook Open Graph, and Google Rich Snippets. Even that wonderful old hypertext heretic Ted Nelson got quoted later in the day: “Everything is deeply intertwingled”. And let’s remember, this is not a “let’s tackle these issues at our own pace when we think the market is ready” sort of problem: it is a “we are sinking under the weight of our own data and the lifeboat was needed yesterday” sort of a problem. Publishers must tackle it: if we learn how to resolve it without intermediaries then we certainly shall not need publishers.
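For readers new to the vocabulary, a “triple” is just a subject-predicate-object statement, and Linked Data is a web of such statements queried by pattern matching. A minimal sketch, using invented shorthand names rather than real URIs or a real RDF library:

```python
# A minimal sketch of Linked Data triples: each fact is a
# (subject, predicate, object) statement; queries are pattern matches.
# The shorthand identifiers below are illustrative, not a real vocabulary.

triples = [
    ("bj:article-424-0317", "dc:creator", "person:Attwood"),
    ("bj:article-424-0317", "dc:creator", "person:Pettifer"),
    ("bj:article-424-0317", "dc:subject", "topic:biocuration"),
    ("person:Attwood", "foaf:affiliation", "org:Manchester"),
]

def match(s=None, p=None, o=None):
    """Return every triple matching the pattern (None = wildcard)."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Who wrote the article?
print(match(s="bj:article-424-0317", p="dc:creator"))
```

Production systems store such statements in triple stores and query them with SPARQL, but the subject-predicate-object model is exactly this simple, which is why it links so readily across domains.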