May 18
Facing up to Father: The pleasures and pains of a Cotswold childhood
Filed Under Uncategorized | 1 Comment
New book by David Worlock. Pre-order now at Marble Hill Publishers or Amazon.
A small Cotswold farm is the setting for a classic struggle of wills. Robert Worlock, eccentric and demanding, resolutely maintains the old ways, determined above all to make his son into a farmer fit to take over the family acres. His son, David, is equally determined not to be bullied into something he neither wants nor likes. His childhood becomes a battleground: can he find a way to make his father love him without denying his right to determine his own life?
Oct 1
AI and scholarly publishing: unfashionable glimpses of hope
Filed Under Uncategorized | Leave a Comment
We were sitting having dinner under the awning of a restaurant in the Place d’Armes, the main square of Luxembourg. Charles Clark, copyright advisor to the Publishers Association, was rehearsing the arguments which he and I, as delegates to the European Commission DGXIII Legal Observatory, were to make the next day at a meeting in the Bâtiment Jean Monnet. I had just asked what the role of software was going to be in protecting digital copyrights, the waiter had topped up the red wine in our glasses, and the author of Clark on Copyright fell silent for a moment. Then the great man declaimed: “Maybe in fact the only way that we can regulate a technology is through the use of that technology… The answer to the machine lies in the machine!”
I have always been immensely proud that I was sitting at the table where this edict was first pronounced, at least five years before Charles published a book under this title in 2005. I must confess that I thought it was just a conversational flourish until I heard him use it in the meeting next day, and then in many meetings in the days thereafter. It remains, however, a notion of real value and power, and it applies much more widely than simply to the notification and protection of copyright, although it remains important there. I had been telling Charles about the ability of my EUROLEX database search software to find keywords in complex text, and the way in which we marked up legal documents with metadata that enabled the software to see how they related to each other and which one had come first. His response is particularly telling at this time, as we are engulfed in a new wave of pessimism about the potentially disruptive effects of a “new” technology. How like the early days of the dotcom boom in the late 1990s are these early years of the understanding of the impact of AI. Unreasonable optimism, stock market hype, lack of political leadership and direction on regulation, media pessimism about the end of all things familiar, and a general consensus that this means the end of human society as we have previously known it on Earth!
Just as the hype is unreasonable, so the pessimism is equally overdone. Perhaps we all need to be more aware of the 50 years or so of developmental work which lie behind the current state of what we call artificial intelligence. Perhaps we all need to be more aware of the serious harms that could take place, and the developmental track that we need to observe before we get to them. And I think that Charles would say that, before we conclude that our own jobs are about to be automated, we ought to look at the problems caused by the machine which can be ameliorated by the machine.
If you are working in scholarly communications, for example, and you do feel pessimistic about the future, then the experience that I had in the middle of last month in Manchester would have been a useful tonic. Receiving an award from the UK professional body of scholarly publishing, ALPSP, was a huge and deeply gratifying honour for me personally, but as I looked out over a conference crowd of 300 people, I also had to reflect upon the dedication and ingenuity in the body of the hall. Later on, Adam Day was honoured for his work at Clear Skies (Papermill Alarm) (https://clear-skies.co.uk). As I listened, I thought about an interview which I recently conducted for the Outsell FutureScapes video blog series, when I spoke to Elliot Lumb and Tiago Barros about their work at Research-Signals.com. We know that we have real problems with research integrity; we also know that we have some really clever people developing intelligent solutions. While fresh problems will appear over time, fresh solutions will as well.
Is anyone in any doubt that we will create fully automated peer review systems which operate more successfully than human beings? I have been watching this space since the work of UNSILO in Aarhus almost a decade ago, and I cannot now conceive that we will fail in the search for systems that detect plagiarism, copyright theft or papermill inventions at a higher level of efficiency than human peer reviewers. While the systems will all require human supervision, audit and checking, they will counter the ability of AI to be misused, until we come to a further level of technological development which requires a further wave of watchdog development.
If I am right in this, then surely AI will change the game in every other respect as well. The recent launch by Digital Science of their Papers Pro environment is surely another significant developmental pointer. We are moving apace towards end-to-end article creation systems. The key question may be a political one: which authority authorises and certifies peer review software on behalf of funders, institutions, and researchers?
In some scientific disciplines, and in some laboratories, the development of the article as a report will become a function of the intelligence in the laboratory network. In other words, the “article” will be in production from the beginning of the research process and will exist as a series of elements which can be drawn together and updated at will. Then our now elderly attempts at article processing automation – the ScholarOne generation – will be replaced by systems which do not just process but actually create most elements of the article. Human intervention in detailing findings and drawing conclusions will of course remain critical: other sections of scientific articles, like methodologies and literature reviews, are already semi- or completely automated. Those in the scholarly communications businesses who think that AI is all about data reuse, or, sadly, about windfall profits from data sales, have not yet thought through the complete range of potential applications of machine intelligence. Now, surely, at the dawn of the age of scholarly self-publishing, is the time for everyone to think very hard about the role of the journal.
Will the thinking take place which is required across the entire scholarly waterfront in order to find and fund the technologies and the business models which will effectively recast the future for knowledge transfer in our society? Again I find a degree of pessimism that really surprises me. Then I heard from the founders of Scholarly Angels (https://scholarly-angels.com). Seeing experienced entrepreneurs like Andrew Preston, Ben Kaube and Paul Peeters scouting the market for fundable initiatives and start-ups to incubate is a hugely hopeful sign. Private equity and venture capital will not do this early-stage work on their own. And the existing institutions of scholarly communication, when they talk change, too often talk about change as if it is something that happens to everybody else around them, but not to themselves. This may have got them through the “age of digitalisation”: it will not get them through the age of machine intelligence.
Aug 26
Here is a problem.
A problem that we must tackle now, and quickly, before the prevalent use of AI in education becomes fully established. If we as a society, our international organisations and governments, professional groups and educators individually, do not decide on a framework for the positive use of AI in education, then our neglect will build a framework for the misuse of AI in education.
It could go one of two ways. Either AI becomes the greatest gift to learning and literacy, and to skills-based education generally, that we could have ever imagined, increasing the ability of human beings to acquire skills and knowledge and develop their intellectual capacity beyond the scope of our current educational systems. Or AI could become an educational straitjacket more rigid than anything that we have previously thought possible. To put it in Orwellian terms, it could become a substitute for education itself, condemning children yet unborn to a serfdom derived from a device culture that delivers the “solutions” that those who created the systems thought were appropriate for learners to have. It could become the ultimate political and social control.
I started this train of thought at the beginning of the week as I read through the eighth biennial report published by Scholastic (see link below), detailing literacy levels and reading skills through a survey of children of different ages and their parents. There has been a gap between this survey and the last because of the pandemic, but Scholastic are to be commended for staying loyal to this vital public duty. As I read this excellent document, I reflected that I started my working life as an educational publisher, and in the decade that I spent developing learning materials for schools, there was the comforting and satisfying feeling that we were working in a world where literacy was inexorably increasing. As the Scholastic survey shows, historically that trend ceased in the 21st century, and in the pandemic literacy levels declined. If they are now stabilising and even picking up slightly, there is no room for complacency. The survey underlines the importance of literacy by linking it to mental health and happiness, and to communication in general, particularly in social and family life.
So what, I wondered, can we do to get literacy off its plateau and begin a steady increase again? Obviously AI could be a key factor. I’ve written elsewhere about the potential for AI in providing personalised tutoring and learning journeys for individuals, adjusted much more closely to the appropriate base of learning and level of accomplishment. AI could be instrumental in finding the right style and presentation of content to maximise learning readiness. It could also help teachers to diagnose learning difficulties as well as suggesting ways around those problems. I am an optimist, and I want to believe that AI can help us to create a better world of education where more young people are able to optimise their skills to a greater level and contribute more effectively to the society in which they live.
At this point I turned to the work of the UK National Literacy Trust for further evidence, and found their current survey work on the use of devices (aka smartphones) and their effect on literacy. This makes depressing reading. I include some headline findings at the bottom of this blog, together with a link, but reading their work reminded me that I am so old that I lived through the Calculator Moment. After my formal education was over, schools in the UK were instructed to permit the use of electronic calculators in the classroom. Since I had failed mathematics twice and had to retake it, I was personally unmoved. It seems strange now, with a calculator function in every smartphone, but outcry and lamentation went up from every parent and teacher in the land. A vital skills base was going to be lost. What will happen when we run out of batteries? Did this mean the death of algebra and geometry as well as basic computational skills? I feel forced to wonder now, as Apple prepares to launch Apple Intelligence, a generalised AI environment for consumers on devices, whether we are at the beginning of a process where the AI in our device defines what we need to read, and then reads it to us, and understands what we need to write, and writes it for us. I do see the paradox, I do get the irony. I am an old man with impaired vision and I glory in the fact that voice is now the driver of my interaction with machines, while worrying about the idea that voice-driven devices using AI may undermine the very skill sets that I most value.
Friends console me. They point out that mankind survived the transition from the gearstick to the automatic gearbox. They point to the huge advances made as we moved through robotic process transformation into fully automated AI workflows. But I have lived in countries and at times when there was not enough electricity to go round and the lights went out every day for a number of hours. So I would just like to see us build the skills-based, literacy-enhancing AI modelling before we get to a totally device-dependent, skills-denuded world. My late father, a farmer, decided that if I was not going to follow in his footsteps then I needed to enter the world with some basic skills. He taught me to “lay” a hedge, build a dry stone wall and thatch a roof. I was not so ambitious for my children and grandchildren: I simply deeply desired that they should learn to read and write so that they could share some of the great pleasures that I have derived from those activities. When I use ChatGPT or Perplexity in my daily work, I feel pleasure that I am extending my skills and building on my knowledge. So, please, can we use AI to develop the literacy skills and knowledge of the learners in our schools today? And can we do it before we tell them that they do not need any of those old literacy skills anymore, because all of the answers will always be available on the device (a charged battery and bandwidth availability will always, of course, be available to everyone everywhere!)
PS I did ask Perplexity and ChatGPT-4 for guidance on this issue, and they both answered judiciously that, on the one hand, good things would happen, and on the other hand, bad things would happen! Just as I feared!
UK National Literacy Trust survey 2024
- Almost 1 in 2 (47.4%) young people said that, when they use AI, they usually add their own thoughts into anything it tells them, while 2 in 5 (39.9%) said they checked outputs from generative AI as they might be wrong.
- However, 1 in 5 (20.9%) said that they usually just copied what generative AI told them and 1 in 5 (20.6%) did not check outputs, suggesting greater support may be needed to ensure this group of young people have the information and skills they need to critically evaluate AI responses.