I am staying in a (very good) hotel in Nashville, TN, and in the next-door room there is a dog. Not a huge one, I guess from the soprano bark, but one loud enough to induce IBH (increased blogging behaviour) in me. This should all settle down and revert to normal next week, but the idea that is “dogging” me tonight, as both Dog and I seek sleep and relaxation, is this: in order to enjoy optimal content in a world of multiple mobile access points, do we alter the content, alter the devices, or alter the user experience?

First, some definition of terms. In the hotel lobby this evening I noticed device proliferation like never before: a PC (the concierge’s), laptops, iPads, smartphones, other tablets and PDAs. Clearly all can access the same content via the Web, but they do not all have the same experience of it. I, for example, cannot watch the Six Nations rugby because my screen size is wrong for one source (and where the size is right, the vendor cannot sell me the content because of territorial rights – my credit card is registered in the wrong country). The rights question is one for another day: my issue this evening is how to free content from the limitations of the device display.

And in thinking through the problem my thoughts go back all the time to the article by Chris Anderson and Michael Wolff in Wired (http://www.wired.com/magazine/2010/08/ff_webrip/all/1) on the death of the Web and the ascendancy of the Internet. So we might say that these issues will be resolved on the Internet by an App which the user downloads, and which interfaces with their content sources and optimizes them for whatever device is being used to access them. The appearance and treatment of content therefore becomes part of the design interface of the Internet, and nothing to do with the source publisher, who will “create” in a lowest-common-denominator context that allows for the widest range of optimization. The Bland leading the Bland, perhaps, and certainly something which becomes more complex as we introduce more images, graphics, video and audio alongside text in increasingly multi-media services. Still, this is the user workflow approach, with the App allowing users to control their access mode.

If this world prevails, some of my publisher friends will run screaming into the street tearing their hair (though few of them have much of that). They want to re-assert the primacy of the Web, because they want to continue to control the customer in every way that they can. Already threatened by Apple, Amazon and Kobo, and only partly disarmed by the hope that Google Editions may prove an ally after all, many publishers see loss of control over the delivered appearance of their products as an ultimate separation from end users. They would want editionizing software that ran with the product, allowing you to see it differently according to the device you are using but, within your licence, always ensuring that what you are looking at is optimized for that device. In this way the publisher of origin would be able to charge for the added value of multiple-device usage as well as keep control of the licences conferred on end users.

This may not be an enduring problem, since the network will one day resolve it as an access condition. But in the meanwhile there are choices. And as it happens, we have what citizens here call a “bake-off” between the two opposing camps. In the red corner, on my right, please meet Flipboard Pages (http://flipboard.com), which will take any page of published media you encounter on Twitter or Facebook and reconfigure it to read properly on your access device. This is an App, and it is the beginning of a workflow solution.

And in the blue corner, on my left, meet the newly launched TreeSaver (http://treesaver.net), a JavaScript solution for the publishing community that allows the same content to be viewed on multiple devices in very different contexts. It adjusts automatically to the context, and the portfolio of exemplars on its website works very impressively. This is the Web solution and represents the way in which the content creation community will try to fight back. Add this, publishers will say, and it will justify higher prices for subscription or one-off products. Buy from us, the intermediaries will say, and you can have Grandson of Flipboard as standard on all our products and services.
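For readers who want a feel for what “adjusts automatically to the context” means in practice, here is a minimal sketch of the general technique, written as ordinary browser script. It is not TreeSaver’s actual API: the breakpoints and the chooseGrid helper are my own invented examples of how a single piece of content can be reflowed to suit whatever screen happens to be reading it.

```typescript
// A minimal sketch (not TreeSaver's API): pick a column grid for the
// current viewport and re-apply it whenever the window or device changes.
type Grid = { columns: number; fontSize: string };

// Hypothetical breakpoints; a real library would ship many more.
function chooseGrid(viewportWidth: number): Grid {
  if (viewportWidth < 480) return { columns: 1, fontSize: "18px" };  // phone
  if (viewportWidth < 1024) return { columns: 2, fontSize: "17px" }; // tablet
  return { columns: 3, fontSize: "16px" };                           // laptop or larger
}

function applyGrid(article: HTMLElement): void {
  const grid = chooseGrid(window.innerWidth);
  article.style.columnCount = String(grid.columns);
  article.style.fontSize = grid.fontSize;
}

// The same content source, restyled for whatever screen is reading it.
const article = document.querySelector<HTMLElement>("article");
if (article) {
  applyGrid(article);
  window.addEventListener("resize", () => applyGrid(article));
}
```

The point of the sketch is simply that the content lives once, and the presentation is decided at reading time rather than at publication time.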

As I say, this may not last forever, but, in every field of content, the next 24 months will see decisive battles over the business models of content marketplaces. Do Apple et al get to restructure the business or not? And if not, do the originating agencies retain control of the appearance as part of the battle to retain a direct connection with the consumer? Issues that did not appear real six months ago are now front and centre. How can you keep your hair when all around you are losing their heads? And is this issue the Dog that Didn’t Bark in the Night?

I am still rolling across America, in a journey last week from San Diego to New York (again) for the DeSilva+Phillips Media Dealmakers Summit at the Pierre, and now on to Nashville, Tennessee. More below on the conference, but first back to a theme started in my blog “News not fit to Print”. I am becoming obsessed with the science around automated story development, and now see it everywhere I look. And everywhere I look I see a Western culture obsessed with fact-based journalism. As in Europe, much of the core material in reportage is statistical. Today is Super Bowl Sunday and the stats are coming down like dandruff, but I already wrote about Statsheets in the previous article, so let’s not go there. Instead, I have a copy of the Tennessean for 6 February in front of me. Let’s try that.

First off, this is a good newspaper and nothing I say is intended to denigrate it. But the urge to “factualize” is all over it. The front-page headline reads “Teaching immigrants is a growing challenge”. Apparently 22% of Metro Nashville public school students now need to learn English as a second language, compared to 15% in 2005. The city has, in an annual student enrolment of 78,000, 10,692 students whose first language is Spanish, 1,749 Arabic, 999 Kurdish, and more and more breakdowns until we reach the Burmese and Karen speakers at 169 and Amharic speakers at 154. Think this is a naturally statistical story? Let’s go to the local news section, whose arresting headline is “Execution Drug Options Limited”. Here we learn that Tennessee has 86 inmates on death row but only enough drugs to execute 8 of them. Sodium thiopental, the anesthetic administered before the fatal injection, is no longer made in the US, so state governments are having to use veterinary anesthetics or buy the drug covertly in Europe – a dealer based in the offices of a British driving school in London is intriguingly mentioned in this connection…

But I am getting carried away. The point is that the core “facts” of the narrative in these stories are the figures, and that is where Narrative Science (http://www.narrativescience.com/) comes in. As I was writing my first piece, this company announced a $6 million funding round led by Battery Ventures. The company was founded by a group whose experience includes Google, DoubleClick, and computing and journalism at Northwestern in Evanston, Ill. Their idea is to take all those fact-based stories, turn the facts into computer-based narrative, create templates around their recurrence and generate a new story with each update. Employment statistics, oil production, share price movements, population change and so on. We are constantly comparing this quarter to last, or to the same quarter last year, or to the best or worst of the last 5, 10 or 50 years. Where these are recurrent interests a computer can write them very effectively – and, a cynic would say, is more likely to report them accurately.
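To make the mechanics concrete, here is a deliberately crude sketch of template-driven story writing: figures in, a sentence of copy out. It is my own caricature, not Narrative Science’s technology, and the metric, region and numbers are invented purely for illustration.

```typescript
// A caricature of template-driven "automated journalism": feed in the
// latest figure and a comparison figure, get back a sentence of copy.
// All names and numbers here are invented for illustration only.
interface StatUpdate {
  metric: string;   // e.g. "unemployment rate"
  region: string;   // e.g. "Tennessee"
  current: number;  // latest reading, in percent
  previous: number; // year-earlier reading, in percent
  period: string;   // e.g. "the fourth quarter"
}

function writeLead(u: StatUpdate): string {
  const delta = u.current - u.previous;
  const verb = delta > 0 ? "rose to" : delta < 0 ? "fell to" : "held at";
  return `The ${u.metric} in ${u.region} ${verb} ${u.current.toFixed(1)}% ` +
         `in ${u.period}, compared with ${u.previous.toFixed(1)}% a year earlier.`;
}

// Each new release of the figures simply re-runs the same template.
console.log(writeLead({
  metric: "unemployment rate",
  region: "Tennessee",
  current: 9.4,
  previous: 10.1,
  period: "the fourth quarter",
}));
```

Multiply that one template across every recurrent statistical release and you have the outline of the business.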

And the implications of this are immense, and were brought home to me by a casual conversation last week with the digital director of a leading B2B player. He is a Narrative Science trialist and his service is due to launch during February. He noted both the need for very rapid updates of market stats in his sector and the fact that standard conventions around comparisons make these stories ideal for computerized updates. These too were stories that needed to be squirted quickly onto mobile platforms – comment could follow later once everyone had the core narrative. He then alluded to the cost savings and the annual cost of journalists. I walked away with the idea in mind that the critical path to saving B2B, as advertising fails to return, will be a massive change in the cost base of the industry. Ironically, efforts to create a new computerized journalism at Northwestern may well end in the employment of fewer journalists, though those who remain will be needed at a much higher level of intellectual input.

Finally, a footnote on the conference. My panel of B2B players were all stars (Mason Slaine, Clare Hart and Scott Schulman), but outside of them I was very taken with David Liu, CEO of The Knot, and the two founders of Gilt Groupe: B2C is certainly coming into its own. But the session that made me most thoughtful was an interview with David Levin, the CEO of UBM. His intellectually rigorous approach to a careful acquisition and disposal programme was very admirable. But is the old niche-based B2B model still available? I see Thomson Reuters creating an increasingly cross-sectoral approach as they build bridges between legal, tax and regulatory on the one hand and financial services on the other. Instead of unrelated niches, are we going back to cross-selling related sectors to get growth leverage? And if we are, is the Informa/UBM/EMAP model beginning to creak because these players have too little in any one niche to cross-sell effectively? It depends how you define sector and niche, of course, but we could be in line for another age of Happy Families card-game swaps, aka vertical sector consolidation.
