Mr Bezos and his cohorts at WaPo (the Washington Post to you and me, earthlings) have decided, we are told, on an aggregation model. As far as skilled translators of Bezos-speak can tell, this will mean bringing together licensed news content from all over the globe – spicy bits of the Guardian, a thin but meaty slice of the London Times, a translated and processed sausage segment of FAZ, a little sauerkraut from Bild, a fricassee of Le Monde… well, you get the picture, and the fact that I am writing this before dinner. These ingredients will get poured into a WaPo membership pot, heated and served to members who want to feel that they are on top of global goings-on, from the horse's mouth, and without having to read the endless recyclings and repetitions which characterize the world media at source.

Well, I see the point, and the idea of membership and participation seems to me one which has endless energy these days. But I have been thinking for several years now that the aggregation business model as experienced on the Web from 1993 onwards is on its last legs. Believing that “curation” is too often a word we use when we are trying to maintain or defend a job, I have tried to steer clear of imagining that storage, the ultimate network commodity, was a good place to start building a business. In the early days of the Web it was certainly different. Then we could create the whole idea of the “one stop shop” as a way of simplifying access and reducing friction for users. All of the things we were collecting and storing, for the purposes of aggregation, were in fact “documents”, and their owners wanted them to be stored and owned as documents, bought as documents and downloaded as documents. The early reluctance of STM publishers to apply DOI identity beyond the article level, and to make citations, references or other document sub-divisions separately searchable, seems in retrospect to demonstrate the willingness of IP owners to manipulate access in order to protect the business model.

Three clear developments have comprehensively undermined the utility of content aggregation:

* the desire of users to move seamlessly, via a link, from one part of one document to another part of a different document seems to them a natural expression of their existence as Web users – and in the content industries we encouraged this belief.
* the availability of search tools on the Web which permit this self-expression simply raises the frustration level when content is locked away behind subscription walls, and increases the likelihood that such content will be outed to the open Web.
* the increasing use of semantic analysis, and the huge extension of connectivity and discoverability which it suggests, make the notion that we need to collect all or sufficient content into a storehouse, and define it as a utility for users by the mere act of inclusion, a very outdated one indeed.

It seems to me that for the past decade the owners of major service centres in the aggregation game – think Nexis, or Factiva, or Gale or ProQuest – have all at various times felt a shiver of apprehension about where all of this is going, but with sufficient institutional customers thinking that it is easier to renew than rethink, the whole aggregation game has gone gently onwards, not growing very much, but not declining either. And while this marriage of convenience between vendors and payers gives stability, end users are getting frustrated by a bounded Web world which increasingly does not do what it says on the tin. And since the Web is not the only network service game in town, innovators look at what they might do elsewhere on internet infrastructure.

So, if content aggregation seems old-fashioned, will it be superseded by service aggregation, creating cloud-based communities of shared interests and shared/rented software toolsets? In one sense we see these in the Cloud already, as groups within Salesforce, for example, begin to move from a tool-using environment to user-generated content and, more recently, the licensing of third-party content. This is not simply, though, a new aggregation point, since the content externally acquired is now framed and referenced by the context in which users have used and commented upon it. Indeed, with varying degrees of enthusiasm, all of the great aggregators mentioned above have sought to add tools to their armoury of services, but usually find that this is the wrong way round – the software must first enhance end-user performance, then lend itself to community exploitation – and then you add the rich beef stock of content. For me, Yahoo were the guys who got it right this week when they bought Vizify (www.vizify.com), a new way of visualizing data derived from social media. This expresses where we are far more accurately than the lauded success of Piano Media (www.pianomedia.com). I am all for software companies emerging as sector specialists from Slovakia onto a world stage, but the fact that there is a whole industry, exemplified by Newsweek’s adoption of Piano this week, concerned with building higher and harder paywalls instead of climbing the service ladder to higher value seems to me faintly depressing.

And, of course, Mr Bezos may be right. He has a good track record in this regard. And I am told that there is great VC interest in “new” news: Buzzfeed $46m; Vox $80m; Business Insider $30m, including a further $12m last week; Upworthy $12m. Yet I still think that the future is distributed, that the aggregated collection has a sell-by date, and that the WaPo membership could be the membership that enables me to discover the opinions of the world rather than the news, through a smartly specialized search tool that exposes editorial opinion and thinking – and saves us from the drug of our times – yet more syndicated news!

