Jan 16
Method and Madness
Filed Under Blog, Education, eLearning, Industry Analysis, internet, mobile content, news media, Publishing, Uncategorized
To the great BETT show in London on Friday, now the largest educational technology show in the world. Packed and lively as ever, and its sisal carpets as tiring on the feet as some mini-Frankfurt. So it was not surprising that I suddenly decided to sit down at a stand in the Innovation Corridor and listen like a good kid to whatever that stand chose to tell me. That stand was featuring a guest appearance by Jan Webb, and after 20 minutes I was as keenly attentive as any tribal elder of the Lilongwe being addressed by David Livingstone, or some rude Saul on the road up to Damascus from Tarsus. For here, in twenty slides, was a convincing demo of K-6 self-made learning, all using software generally and freely available, content supplied by class and teacher, and the whole lot referenced via the Resources section of the TES, for whom, I learnt afterwards, Ms Webb now works.

Having gratefully put myself in the hands of Teacher, what did I learn? Simply that there are more than enough free or cheap ways to manipulate content into lesson plans and lessons to revolutionize the primary school curriculum. That while teachers will be providing the pedagogy, learners can explore collaboratively or individually, and the toolset provides the spine of the activity.

We started by making some posters. http://edu.glogster.com came into its own there, allowing us to integrate text, music and video into our work, and just when I wished we had a wall to put them on, Ms Webb produced www.wallwisher.com for that very purpose. I noticed incidentally that some of these sites are beginning to add their own content for education: look at www.freeeslmaterials.com in conjunction with this poster background site. Want to add some sticky post-its? Turn to http://linoit.com. Want to get the kids collaborating around these activities? You can go to www.stixy.com.
But really collaboration is all over the place: Ms Webb pointed to www.twiddla.com for team whiteboarding, as well as www.123.whiteboard.com and www.dabbleboard.com. Finally and joyfully, under this tutelage, I have been improving my drawing skills on www.dumpr.net and, very happily, creating my own comics on www.comicmaster.org.uk.
And there are some real lessons in all of this. As a result, and almost freely (dumpr cost me $20), I now appreciate exactly why I have been saying for two years that the school textbook is a dodo. The richness of the tools and the potential in the screen-based learning experience bear real witness to this. Schools themselves can put together effective learning experiences very cheaply, both to energize learners in every subject and at every level, and to support less able or confident teachers. TES Resources has led the way by creating a national signposting system to great teacher-produced lessons, effectively peer-reviewed by teachers. So let's stop producing textbooks, digital or otherwise, and start producing improved learning experiences? Is that the message? Well, in many ways it is. Just as teachers are moving into new roles, so are publishers. The best work that I have seen in education in the last year comes not from the great and the good of textbook publishing in the 1960s, when I practised it myself with more energy than effectiveness, but from services like Alfiesoft (supporting teachers in testing, marking and reporting: www.alfiesoft.com) and innovators like www.rendezvu.com, pushing out the boundaries around testing proficiency in a spoken language.
As I wandered away from the inspiring Jan Webb, a young woman stopped me in the crowded aisles and pressed into my hands a free… newspaper. I was so shocked that I gulped and grasped it, and then said “thank you”, before enquiring whether the schoolchildren who were about to receive it free as a result of a special offer would recognize it for what it was. After all, most of them come from homes unvisited by such a thing. However, she said helpfully that kids knew they were the things you found in bins outside petrol stations, so I thought it OK to take a copy of First News home and examine it. It certainly is a tabloid newspaper all right. Very little content and no learning. After Ms Webb I baulked at paying £875 per year for a class set of 32 copies of a non-collaborative, uncreative, non-experience. Then I did a little research. The paper is edited by a former BBC magazine publisher and its Editorial Director is Piers Morgan, erstwhile tabloid editor of the Daily Mirror and now the delight of US chat shows. His dark arts are everywhere evident, from the claim to a million readers every week (small print: Source – First News Readership Survey) to the picture of the Queen, the Union Jack – and David Cameron – on the front page. No ads and no topless girls, however. This whole confection is financed by Steve and Sarah Jane Thomson, who successfully sold their advertising monitoring bureau, Thomson Intermedia, to the eBiquity Group and now run Addictive Interactive, a “bespoke social loyalty platform”.
So how can we blame the textbook publishers for not changing their ways when someone thinks there is still a business selling newspapers to schoolchildren? I don’t think Ms Webb would have one in her classroom – unless the pupils had made it themselves.
Jan 12
Take the Program to the Data
Filed Under B2B, Big Data, Blog, Financial services, healthcare, Industry Analysis, Publishing, Reed Elsevier, Search, semantic web, Uncategorized, Workflow
It's Big Data week, yet again. In the last two months we have seen all of the dramas and confusions attendant upon emerging markets, yet none of the emerging clarity which one might expect when a total sea change is taking place in the way in which we extract value from data content. Then this week, with all the aplomb of an elephant determined not to be left behind in a world which has apparently decided that the hula hoop is the only route to sanity, Oracle announced its enterprise Big Data solution. Again. Only now it is called the Big Data Appliance. It started shipping on Tuesday. And the world will never be the same again.
At the heart of the Oracle launch is a Hadoop license. This baby elephant lies at the heart of almost everything. The two Hadoop-based commercializations have both raised finance in the lead-up to 2012: Cloudera ($40m) and Hortonworks ($20m), while other sector players like MapR, who also exploit Hadoop, found 2011 a really good time to raise money. And this had a radiating effect on the whole data-handling sector. Neo4j, a database technology for graph storage and resolution (from Neo Technology, based in Malmö and Menlo Park), raised $10m in a round led by Fidelity. Meanwhile, Microsoft signed a deal with Hortonworks, IBM said it would launch Hadoop in the Cloud, EMC (Greenplum) went for MapR, Dell announced a Hadoop-based initiative, and the world waits and wonders what Hewlett Packard will do, now that it has Autonomy for analytics.
So now we have plenty of initiatives and, as usual, not much idea of who the next generation of users will be. The first generation speak for themselves. We can see the benefits that Facebook derive from being able to use Hadoop-based tools to find connections and meanings in their content that would have been impossible to reveal cost-effectively in a prior age. And the same would be true of such unlikely bedfellows as the Department of Homeland Security, or Walmart, or Sony (think PlayStation Network), or the Israeli Defence Force, or the US insurance industry (via Lexis Risk), or LexisNexis (who announced a Big Data integration with MarkLogic), let alone the two players who effectively started all this: Yahoo! (Hadoop) and Google (MapReduce).

So asking where it goes next is a legitimate question, but one which can only be answered if we accept that the next group of users are never going to recreate the Google server farms in order to break into these advantageous processing environments. The next group of intensive users will have their XML content on MarkLogic, or their graph data on Neo4j. They will want to use the US census data remotely (so will contract with Amazon for processing time on Amazon's web services), and will use a large variety of third-party content held in similar ways. Some of their own content will still be held locally on MySQL databases – like Facebook – while others will be working in part or fully in the Cloud, and combining that with their own NoSQL applications. But the essential point here is that no one will be building huge data warehousing operations governed by rigid and mechanistic filing structures. Increasingly, we are leaving the data where it is, and bringing the analytical software to it, in order to produce results that are independent of any single data source.
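For readers who have never peered inside the elephant, the idea of bringing the program to the data can be sketched in a few lines. What follows is a toy, single-machine illustration of the map/shuffle/reduce pattern that Hadoop and Google's MapReduce industrialize across thousands of nodes; the shard contents are invented, and in a real cluster each shard would sit on a different machine with the `map_phase` function shipped out to it.

```python
from collections import defaultdict

# Three invented text "shards", standing in for data blocks that would
# live on separate nodes of a Hadoop cluster.
shards = [
    "big data big tools",
    "data lives where it lives",
    "bring the program to the data",
]

def map_phase(shard):
    """Emit (word, 1) pairs — this is the code that travels to the data."""
    return [(word, 1) for word in shard.split()]

def shuffle(pairs):
    """Group intermediate pairs by key, as the framework does between phases."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    """Sum the counts for each word."""
    return {key: sum(values) for key, values in grouped.items()}

# Each shard is mapped "in place"; only the small intermediate pairs move.
intermediate = [pair for shard in shards for pair in map_phase(shard)]
counts = reduce_phase(shuffle(intermediate))
print(counts["data"])  # → 3: "data" appears once in each shard
```

The point of the pattern is in that last comment: the bulky raw text never moves, only the small key/value pairs do, which is exactly why recreating a Google-style server farm is unnecessary for the next generation of users.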
And this too produces another sort of revolution. The front door to working in this way is now the organizational software itself. When Lexis Risk announced at the end of last year that they were going to take HPCC open source, a number of critics saw that as turning their back on an exploitation opportunity. Yet it makes very real sense in the context of Oracle, Microsoft and IBM seeking to build their own “solutions”. Some businesses will want to run their own solutions, and will make a choice between open-source Hadoop and open-source HPCC. Others in systems integration will seek out open-source environments to create unique propositions. But since it was always unlikely that Lexis Risk was going to challenge the enterprise software players in their own bailiwick, open source is a way of getting a following, harvesting vital feedback, and earning not insignificant returns from servicing and upgrading users.
I am also delighted to see that another likely winner is MarkLogic, since I have been proud to work with them and speak at their meetings for a number of years. For publishers and information providers, it is now clear that XML remains the route forward. And MarkLogic 5 is clearly being positioned as the information service provider's socket for plugging into the Big Data environment. Anyone who believes that scientists will NOT want to analyse all the data in a segment, or engineers source all relevant briefs with their ancillary information, or lawyers cross-examine all documentation regardless of location, or pharma companies examine research files in the context of contra-indications, should stop reading now and take up fishing. My observation is that Big Data is like due diligence: once someone does it, even if the first results are not impressive, all competitors have to do it. The risk of not trying to find the indicative answer by the most advanced methods is too great to take.
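To make the "analyse all the data in a segment" point concrete: an XML repository like MarkLogic would really be interrogated in XQuery over millions of documents, but the shape of the activity can be shown with a toy Python sketch using only the standard library. The three-document corpus, the tag names and the `papers_on` helper below are all invented for illustration.

```python
import xml.etree.ElementTree as ET

# Hypothetical mini-corpus of XML "documents" — stand-ins for content
# held in an XML document store, wherever each document happens to live.
docs = [
    "<paper><topic>oncology</topic><year>2010</year></paper>",
    "<paper><topic>cardiology</topic><year>2011</year></paper>",
    "<paper><topic>oncology</topic><year>2011</year></paper>",
]

def papers_on(topic, corpus):
    """Return every parsed paper whose <topic> matches, across the corpus."""
    hits = []
    for raw in corpus:
        root = ET.fromstring(raw)
        if root.findtext("topic") == topic:
            hits.append(root)
    return hits

oncology = papers_on("oncology", docs)
years = [paper.findtext("year") for paper in oncology]
print(years)  # → ['2010', '2011']: every oncology paper, regardless of location
```

The scientist, engineer or lawyer in the paragraph above is doing exactly this — a query over *all* the documents in a segment rather than over one curated warehouse — just at a scale where the database, not a script, does the work.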