Jan 26
Manufacturing/Motoring/Media
Here we sit, in a poor benighted island, slowly sinking into economic anonymity, in a great world where economic growth seems to be a property of lands we once called “under-developed”. A worthy come-uppance, and a suitable subject for Davos this week. Yet, as a persistent optimist, I somehow glimpse a glowing future for my children’s children. Information services and solutions lie close to the heart of developmental growth, and I have written here repeatedly (too often for some readers!) about the necessary connection between injecting data/content into workflow and the regeneration of a post-industrial economy. For some reason the information industry has its eyes fixed on pure information usage (sometimes called “research”). In some areas, though – credit rating, risk management, automated financial trading systems, scientific research – we have come out of the bunker and begun to look at the way applied intelligence, often now derived from Big Data and analytics, can change the way that we view the operational logic of whole sectors of commercial and industrial life.
Now, let's pull back a step further and see how information services change networked industry and society at large. I only have space for two examples. The first was driven home to me on Monday at a dinner given by the Real Time Club. The speaker, Dr Siavash Mahdavi (http://en.wikipedia.org/wiki/Siavash_Haroun_Mahdavi), spoke on 3D printing, and by the time he had finished, and we had examined printed hip joints and shoe inserts amongst other examples, the penny was beginning to drop for me. We are moving in the network from manufacturing by extrusion processes through moulds, the pre-digital world of the industrial revolution, to additive manufacturing: creating products in software and instructing printing devices to build them in extremely thin 2D layers, one on top of the other, until the desired shapes and structures are created. Medical implants have had the publicity here, but gold jewellery was mentioned as an application. This is a design-intensive, network-efficient manufacturing world in which the design and the actual printer can be in totally different places. Printing can take place using any materials which can be chemically adapted to the process. Customisation (the running shoe insert designed for the imprint and weight distribution of your own foot) and personalisation are at the centre of this. Every product can be made for you. However, it remains a requirement that everything we know about the performance, qualities and expectations of an artificial hip is brought to bear in the network upon the design process, as the information services world creates the bullets for manufacturing workflow to fire. And all this is going strong now: the lead engineering player in 3D printing in the UK is Renishaw (http://www.renishaw.com/en/additive-manufacturing-news–15505), and with eerie coincidence it announced today a strong trading year, with sales up 11%.
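The additive idea, building a solid object as a stack of very thin 2D layers, is easy enough to picture in a few lines of code. What follows is a minimal toy illustration of my own (slicing a sphere into horizontal layers and working out the size of each deposited cross-section), not any real slicer software:

```python
import math

def slice_sphere(radius, layer_height):
    """Slice a sphere of the given radius into horizontal layers.

    For each layer height z (measured from the sphere's bottom),
    return the radius of the circular 2D cross-section an additive
    printer would deposit at that level.
    """
    layers = []
    z = 0.0
    while z <= 2 * radius:
        d = z - radius  # distance from the sphere's centre to this layer
        cross_section = math.sqrt(max(radius**2 - d**2, 0.0))
        layers.append((round(z, 3), round(cross_section, 3)))
        z += layer_height
    return layers

# A 10 mm sphere sliced at 2.5 mm per layer: the slice radius grows
# towards the equator and shrinks again -- exactly the stack of thin
# 2D shapes that an additive process builds up one on top of another.
for z, r in slice_sphere(5.0, 2.5):
    print(f"z = {z} mm, slice radius = {r} mm")
```

The point of the toy is only that the whole object exists first as data: change the numbers and a different, personalised object comes out of the same printer.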
If this is not bizarre enough, I stumbled upon a Google story this week about automated motoring. Apparently Google's own patented technology had racked up 200,000 driverless miles by the end of last year. This may just be another Google enthusiasm which runs out of steam, but it does have a history (http://en.wikipedia.org/wiki/Autonomous_car), and a great deal of real research behind it, and my bet is that it will happen in this over-crowded isle a lot quicker than the UK estimate of 2056. Extending the network to our over-populated motorways may be the only way to squeeze more capacity from infrastructure we do not have the space to rebuild, and to control scarce parking resources. Driving my car to the motorway and then surrendering control to a system that governs inter-car distance and speed until I leave is a likely first stage. And as the car becomes part of the network, its ability to intelligently appraise where it is, where it is going and how it is feeling becomes a natural extension of a world of autos which are already computers on wheels. Information service solutions will be vital to feed this activity: important players like ITOWorld (www.itoworld.com) already assemble critical geospatial data, matched at the vital micro level by services like Elgin (http://www.elgin.gov.uk/), which can tell you about every road repair in Britain. At the moment this is part of the world of local government and planning: tomorrow it will have to be part of the knowledge base of your motor car.
When I think about examples like these I become more and more convinced that the new world of information service knowledge and intelligence will be more important than the old one, patrolled by intermediaries like librarians, and governed by quite irrelevant business models like advertising. And here is a world where use shapes the content, and where suppliers are involved in developing solutions for sectors or even individual companies. Here the information services and solutions players have forgotten whether they are “content” or “software” players, because it has no bearing on the end result, and in any case they needed both elements to play.
So who will do this stuff well? Undoubtedly the Indians and the Chinese and the Brazilians amongst others. But in many ways this future vision levels out a lot of the inequalities of the old and new worlds. You do not need a great deal of cheap labour to compete here. Capital too will have a different importance if you can custom manufacture close to the point of use, and avoid shipping and warehousing. I quite fancy the chances of this old island: good with design, strong start-up culture, great software development skills, good financial services investment culture, strong presence in information and education markets globally. Or at least I would, if our politicians did not think that modernity was returning to the railway investment mode of Britain in the 1840s, or aping the French and the Japanese high speed trains of the 1960s and 1970s. The infrastructure requirement here would be to create the most intensive high bandwidth broadband coverage in Europe. Fat chance of that while politicians think there are more votes to be got by shaving 34 minutes off the journey time from London to Birmingham!
Some of my friends call this type of article “futuristic madness” (and that was the polite one!). But, to me, the real madness lies in taking the formats of the Gutenberg age (books, newspapers etc), carefully wrapping them in software and delivering them in facsimile form across the network – and then calling these eFormats Innovation!
Jan 18
Workflow from the Bottom Up
Trends and trending analysis are one thing; making an impact on the way people work is often quite another. So while I respectfully write up the huge progress being made to provide large-scale tools for analytical discovery in unimaginable quantities of data, a small part of me remains sceptical about the short-term impact of these developments on the working lives of professionals. Look at researchers in science and technology: you can readily imagine the impact of Big Data on Big Pharma, but can you so easily imagine what this will mean in materials science? Or can you see how the workbench performance of the individual researcher in neuroscience might be affected? It's tough, and because it is tough we go back to saying that the traditional knowledge components will last the course. So if you have a good library, access to a reasonable collection of journals and the ability to network with colleagues, then that is enough. Or Good Enough, as we keep saying.
So when I read the words “This is important not only for the supplementary data accompanying one’s experiment, but even negative results” I came alive immediately and read closely what I had hitherto skipped. You see, in all the years that I have spoken with and interviewed researchers, when we get off the formal ground of OA or conventionally published articles, or the iniquities of publishers and the inadequacy of librarians, we get back to some stubborn issues that cling to the bottom of the bucket. One is what to do with the content derived from the research process which did not get into the article that summarized it and drew conclusions from it: the statistical findings, the raw computations, the observations and logs, the audio and video diaries, the discarded hypotheses and so on. Vital stuff, if anyone is going to walk that way again. Even more vital is the detritus of failure: the experiment which never made a paper because it demonstrated what we already knew, or where the model proved inadequate to demonstrate what we sought to show. Researchers going back to find why a generation of research went astray from a finding that proved fallible often need this content: in terms of detective fiction, it is the cold case evidence. Yet more often than not it is not available.
So here is what I found in the nearly discarded press release (http://digital-science.com/press-releases/). Nature Publishing’s Digital Science company (yes, them again!) has refinanced figshare (http://figshare.com), and yesterday it was relaunched. What does it do? It archives all the stuff I have been talking about, providing a Cloud environment with unlimited public storage. They call it “a community-based open data platform for scientific research”. I call it a wonderful way of embedding research workflow into a researchable storage environment that eventually becomes a search magnet for researchers wanting to check the past for surprising correlations. At the moment it is just a utility, a safe place to put things. But if I just add a copy of the article itself, then it becomes a record of a research process. Put hundreds of thousands of those together and you have a Big Data playground. Use intelligent analytics and new insights can be derived, and science moves forward on the tessellation of previous experimentation, only quicker, with less effort and more productivity for the researcher. And much less is lost, including the evidence from the wrong turnings that turned out to be right turnings.
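The value of such an archive can be pictured in miniature. Below is a toy Python sketch of the kind of records such a platform might hold, and of how the “cold case evidence” of negative results becomes searchable once it is stored at all; the field names and the search function are entirely my own invention for illustration, not figshare's actual data model or API.

```python
# Toy records of the kind a figshare-style open data archive might
# hold -- including the experiment that never made a paper.
records = [
    {"title": "Compound X binding assay, run 3",
     "field": "life sciences",
     "outcome": "negative",  # demonstrated what we already knew
     "artefacts": ["raw_counts.csv", "lab_log.txt"]},
    {"title": "Alloy fatigue test, batch 7",
     "field": "materials science",
     "outcome": "positive",
     "artefacts": ["strain_series.csv"]},
]

def cold_case_search(records, field):
    """Return the negative results in a field: the evidence a
    researcher retracing a fallible finding would need."""
    return [r["title"] for r in records
            if r["field"] == field and r["outcome"] == "negative"]

print(cold_case_search(records, "life sciences"))
```

The design point is simply that once the detritus of failure is deposited alongside its metadata, it can be queried like any other content, which is what turns a safe place to put things into a search magnet.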
So will there be 20 of these? Well, there may be two, but if figshare gets an early lead perhaps there will only be one. After all, the reason researchers would come to value this storage is having their content in close proximity to others in their field. And while early progress is likely to run quickest in Life Sciences, this application has relevance in every field of study. It also calls into question ideas of what “publishing” actually is. By storing these data and making them available, are figshare “publishing” them? They are certainly not editing or curating them. Network access alters many things and here, once again, it catches publishing on the hop. If traditional publishers confine themselves to making margins solely from the first appearance of an article, then traditional publishing in this sector is in severe difficulty, whatever happens to the Open Access debate. Elsevier and Nature clearly get it: go upstream in value terms or drown in commoditized content where you are. But does anyone else see it? And why not?