When I was forced to cease blogging temporarily a few years ago (see personal note below), AI was a fact of life. Every year we saw improvements in the use of increasingly sophisticated algorithms. We noted the rise and rise of robotic process automation. Those of us with two decades of industrial memories recalled expert systems and neural networks. Those of us with four decades remembered hearing Marvin Minsky at MIT, telling us that he wanted the books in our libraries to speak to each other, to exchange and update knowledge and to build new knowledge out of that exchange. Yet nothing here prepared us for 2023.
When historians get last year into some perspective, they will probably conclude that what happened owed as much to the content creation requirements of online advertising, or to the financial services appetite for a new wave of Silicon Valley investment frenzy, as it did to a breakthrough in AI capabilities. Yet what actually happened last year, even without such a perspective, is truly amazing. The year installed AI as a key component of strategic planning in almost any commercial activity. The hyper-investment and hyperactivity that resulted produced tools in generative AI which, a mere year later, are immensely more powerful. Compare ChatGPT-3 to the current iteration of Gemini: a context window of 122,000 tokens against one of 1 million. Then look at the public recognition factor, and you find a world in which there is now a normal expectation that machine intelligence and machine interoperability will be part of everyone's everyday life. It is as if a switch has been flicked on, illuminating a new room into which we have walked for the first time. All of us know that we can never now go back through that door or switch off that light. Pandora's Law.
And we should not want to go back, either. What has happened should simply remind us that change does not happen evenly, and that the realisation of change sometimes takes longer than we anticipate. But in 2023 I detected something else as well: a fear of change that was a little beyond normal anxiety. In the world in which I have worked for over 50 years, the idea that content creation through the exercise of machine intelligence could be more threatening than beneficial gained a powerful currency and soon turned into dystopian editorials in both trade and consumer media. As a result we have come out of 2023, the year of AI megahype, with both an enhanced view of the speed and power with which machine intelligence will help, support, and change our society, and a hysterical fear of evils unknown which may result from quantum computers secretly plotting our downfall on the network. Since the invention of the wheel mankind has been learning to accommodate and live with the machine, and we shall surely do so in the world of AI as well. Yet, in the clan to which I belong, the data, services and solutions vendors who called themselves content companies and information providers a few years back (and before that used to describe themselves in Gutenberg terms as publishers), there has been fear of a different sort. Whether it meant anything or not, they have always embraced the consolation of copyright, the belief that intellectual property can be described and identified and protected, as one of the bulwarks of their commercial viability. The idea that individual creativity could be mirrored by machine intelligence, or that the machine might regurgitate, in whole or in part, content acquired as training data, or that the value of content or data once described as “proprietary” could be lost in the machine intelligence age: these ideas are the very stuff of panic. Then add to them the knowledge that machine intelligence can produce “hallucinations”, that generated answers may not always be accurate and correct, and the long-held belief that machines loaded with garbage do indeed produce rubbish, and we find integrity fears added alongside fears of theft and diminishing valuations.
One of my mentors of many years ago, recommending me to a potential client, commented that “while generally sound on strategy, he can be unreliable on copyright”. I have over the years tried to be better behaved, but it is difficult, because it takes so long to bring the heavy guns of copyright law to bear on problems that have usually departed long before adequate legislation is available to control them. Early regulation on AI, like the EU AI Act, seems, in any case, more bent on risk control than anything else.
While the copyright lawyers are anxiously seeking re-regulation for a machine age, I for one would take the arguments much more seriously if copyright holders paid real attention to marking their works with appropriate metadata and PIDs that indicated ownership and provenance. It is hard to imagine machine-interoperable checking of the copyright status of works if those same works are not identified in ways that machines can recognise and understand. Then it becomes more possible to put pressure on AI developers to ensure that they license the genuine article, recognise the credentials of the real thing publicly, and increase the integrity of their solutions by showing users that only the real thing was used in the construction of the outcomes desired. This is beginning to happen in some encouraging ways: the fact that both Google and OpenAI now accept C2PA, the coding system developed for images and videos, shows what can be done by persuading people that being licit and responsible is good for business. Rather than have “fake” hung round their necks, it is better to say that you will check and code every image that you use, especially in an American election year. In text and data there are similar emerging conventions. The ISCC (International Standard Content Code) is now a draft ISO standard. The long-established GO FAIR provisions of the FAIR Foundation create metadata standards that render data “findable, accessible, interoperable and reusable”. Data and content owners who make it clear to interested parties and machines what the scope and ownership of their asset entails have a much better chance of working successfully with it in this new world. And in particular, they have a better chance of entering into proper and satisfactory licensing agreements around it. If we are able to persuade the machine intelligence world that integrity is vital to business success, then we have a far better chance of creating the sort of licensing environments that pioneers like the Copyright Clearance Centre have advocated and piloted for years. Businesses in the network have to make for themselves the business conditions that work in the network.
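What "marking works" might look like in practice can be sketched very simply. The snippet below is a minimal, hypothetical illustration, not the ISCC algorithm or any vendor's actual schema: it attaches a plain SHA-256 fingerprint and schema.org-style ownership and licence fields to a piece of text, so that a crawler or an AI developer's ingestion pipeline could check provenance and licensing terms by machine. The owner, PID and fingerprint field are all invented for the example.

```python
import hashlib
import json
from datetime import date

def provenance_record(text: str, title: str, owner: str, licence_url: str, pid: str) -> dict:
    """Build a minimal, schema.org-flavoured provenance record for a text work.

    Illustrative only: the fingerprint is a plain SHA-256 hash of the text,
    not an ISCC or any other standard content code.
    """
    return {
        "@context": "https://schema.org",
        "@type": "CreativeWork",
        "name": title,
        "creator": owner,
        "identifier": pid,              # e.g. a DOI or other persistent identifier
        "license": licence_url,
        "dateCreated": date.today().isoformat(),
        # Non-standard field, purely illustrative: a cheap content fingerprint.
        "contentFingerprint": hashlib.sha256(text.encode("utf-8")).hexdigest(),
    }

if __name__ == "__main__":
    record = provenance_record(
        text="Full text of the work goes here...",
        title="The Evolving Role of Data in the AI Era",
        owner="Example Publishing Ltd",            # hypothetical owner
        licence_url="https://creativecommons.org/licenses/by/4.0/",
        pid="https://doi.org/10.9999/example",     # hypothetical PID
    )
    print(json.dumps(record, indent=2))
```

Nothing here is clever; the point is simply that a few unambiguous, machine-readable fields attached at source are what make downstream licensing checks possible at all.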
So who will police and patrol all of this until law and regulation finally catch up, if they ever do? The publisher and copyright lawyer Charles Clark, my fellow delegate to the European Commission Legal Information Observatory, invented the maxim "the answer to the machine lies in the machine". It was never better applied than at this point. If you want to find bias in machine intelligence, then the simplest way to do so is programmatically. If you wish to know whether training data has been derived from legitimate, known sources that will vouch for accuracy and currency, ask the machine to interrogate the machine. For the AI companies, the price of reputation may be breaking open the black box and demonstrating good practice in creating answers from the very best inputs.
PERSONAL NOTE: I maintained this blog continuously from 2009 to 2021. Then I suffered eyesight problems which have left me with some 40% of my vision. My road back to this form of communication has taken three years, during which I have had the huge pleasure of writing two books, drafting a third and eventually returning to blogging. Writing in the world of text-to-speech and speech-to-text software is different. As I say at the end of all of my communications at work: "If you find errors of syntax, grammar or spelling in what I have written, please remember that it is much harder for me to edit than ever before, so try to smile indulgently. On the other hand, if you think that I have written utter gibberish, please contact me immediately!"
"The evolving role of DATA in the AI era"
18 September 2023 Leiden
"If we regulate AI and get it wrong, then the future of AI belongs to the Chinese." When you hear a really challenging statement within five minutes of getting through the door, you know that, in terms of conferences and seminars, you are in the right place at the right time. The seminar leaders, supported by the remarkable range of expertise displayed by the speakers, gave a small group with wide data experience exactly the antidote needed to the last nine months of generative AI hype: a cold, clean, refreshing glass of reality. It was time to stop reading press releases and start thinking for ourselves.
FAIR's leadership, committed to a world where DATA is findable, accessible, interoperable, and reusable, began the debate at the requisite point. While it is satisfying that 40% of scientists know about FAIR and what it stands for, why is it that, when we communicate the findings of science and the claims and assertions which result from experimentation, we produce old-style narratives for human consumption rather than, as a priority, creating data in formats and structures which machines can use, communicate and interact with? After all, we are long past the point where human beings could master the daily flows of new information in most research domains: only in a machine intelligence world can we hope to deploy what we know is known in order to create new levels of insight and value.
So do we need to reinvent publishing? The mood in the room was much more in favour of enabling publishers and researchers to live and work in a world where the vital elements of the data that they handle are machine-actionable. Discussion of the FAIR enabling resources and of FAIR Digital Objects gave substance to this. The emphasis was on accountability and consistency in a world where the data stays where it is, and we use it by visiting it. Consistency and standardisation therefore become important if we are not to find a silo with the door locked when we arrive. It was important, then, to think about DATA being FAIR "by design" and to treat FAIRification as a normal workflow process.
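What "FAIR by design" might mean as a workflow step can be sketched in a few lines. The gate below is a hypothetical illustration, not a GO FAIR tool: it simply refuses to release a dataset record unless the minimum machine-actionable elements, a persistent identifier, an access location, a declared vocabulary and a licence, are present. The required fields are my own shorthand for the four FAIR principles, not an official checklist.

```python
# A minimal, hypothetical "FAIR by design" gate for a data-release workflow.
# The required fields are an assumption for illustration, not an official GO FAIR checklist.

REQUIRED_FIELDS = {
    "pid": "Findable: a persistent identifier (e.g. DOI or handle)",
    "access_url": "Accessible: a resolvable location or access protocol",
    "vocabulary": "Interoperable: the controlled vocabulary or ontology used",
    "licence": "Reusable: an explicit licence for reuse",
}

def fair_gate(record: dict) -> list[str]:
    """Return a list of FAIR problems; an empty list means the record may be released."""
    return [
        f"missing '{field}' ({reason})"
        for field, reason in REQUIRED_FIELDS.items()
        if not record.get(field)
    ]

dataset = {
    "pid": "https://doi.org/10.9999/example-dataset",   # hypothetical identifier
    "access_url": "https://repository.example.org/datasets/42",
    "vocabulary": None,    # the ontology was never declared
    "licence": "CC-BY-4.0",
}

problems = fair_gate(dataset)
if problems:
    print("Not FAIR by design yet:")
    for p in problems:
        print(" -", p)
else:
    print("Record passes the minimal FAIR gate.")
```

The design point is that the check runs before release, inside the workflow, rather than as a retrospective audit; that is the difference between FAIRification as a project and FAIR by design.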
If we imagine that by enabling better machine-to-machine communication with more consistency we will improve AI accuracy and derive benefits in cost and time terms, then we are probably right. If we think that we are going to reduce mistakes and errors, or eliminate "hallucinations", then we need to be careful. Some hallucinations at least might well be machine-to-machine communications that we, as humans, do not understand very well! By this time we were in the midst of a discussion on augmenting our knowledge transfer communication processes, not by a new style of publishing, but by what the FAIR team termed "nanopublishing". Isolating claims and assertions, and enabling them to be uniquely identified and coded as triples, offered huge advantages. These did not end with the ability of knowledge graphs to collate and compare claims. This form of communication had built-in indicators of provenance which could be readily machine-assessed. And there was the potential to add indicators which could be used by researchers to demonstrate their confidence in individual findings. The room was plainly fascinated by the way in which the early work of Tobias Kuhn and his colleagues has been developed by Erik Shultes, who effectively outlined it here, and by the GO FAIR team. Some of us even speculated that we were looking at the future of peer review!
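To make the triple idea concrete, here is a minimal sketch using the Python rdflib library. It encodes one claim as a subject-predicate-object triple together with a simple provenance statement; the identifiers and the compound-protein claim are invented for illustration, and this is a deliberate simplification of the full nanopublication model (which places the assertion, its provenance and its publication information in separate named graphs).

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

# Hypothetical namespaces and identifiers, for illustration only.
EX = Namespace("https://example.org/claims/")
PROV = Namespace("http://www.w3.org/ns/prov#")

g = Graph()
g.bind("ex", EX)
g.bind("prov", PROV)

# The assertion: a single scientific claim expressed as one triple.
g.add((EX["compound-X"], EX["inhibits"], EX["protein-Y"]))

# Minimal provenance about the claim: who asserted it, from which article,
# and with what stated confidence. A real nanopublication keeps these in
# separate named graphs; here everything sits in one graph for brevity.
claim = EX["claim-001"]
g.add((claim, RDF.type, PROV.Entity))
g.add((claim, PROV.wasAttributedTo, EX["researcher-orcid-0000-0000-0000-0000"]))
g.add((claim, PROV.wasDerivedFrom, EX["article-doi-10.9999-example"]))
g.add((claim, EX["confidence"], Literal(0.8, datatype=XSD.decimal)))

print(g.serialize(format="turtle"))
```

Because the claim, its source and its asserted confidence are all expressed as triples, a knowledge graph can collate and compare them without any human reading of the surrounding narrative, which is exactly the point made in the room.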
Despite the speculative temptations, the thinking in the room remained very practical. How did you ensure that machine interoperability was built in from the beginning of communication processing? FAIR were experimenting with Editorial Manager, seeking to implant nanopublishing within existing manuscript processing workflows. Others believed we needed to go down another layer. Persuade SAP to incorporate it (not huge optimism there)? Incorporate it into the electronic lab notebook? FAIR should not be an overt process, but a protocol as embedded, unconsidered and invisible as TCP/IP. The debate in the room about how best to embed change was intense, although agreement on the necessity of doing so was unanimous.
The last section of the day looked closely at the value, and the ROI, of FAIR. Martin Romacker (Roche) and Jane Lomax (SciBite) clearly had little difficulty in pointing to benefits in cost and time, as well as in a wide range of other factors. In a world where the meaning as well as the acceptance of scientific findings can change over time, certainty in terms of identity, provenance, versioning and relationships became a foundation requirement for everyone working with DATA in science. Calling machines intelligent and then not talking to them intelligently, in a language that they could understand, was not acceptable, and the resolve in the room at the end of the day was palpable. If the AI era is to deliver its benefits, then improving the human-to-machine interface in order to enable the machine-to-machine interface was the vital priority. And did we resolve the AI regulatory issues as well? Perhaps not: maybe we need another forum to do that!
The forum benefited hugely from the quality of its leadership, provided by Tracey Armstrong (CCC) and Barend Mons (GO FAIR). Apart from speakers mentioned above, valuable contributions were made by Babis Marmanis (CCC), Lars Jensen (NFF centre for protein research) and Lauren Tulloch (CCC).
There is a place in the recent Star documentary "Dopesick" where the investigators, seeking to validate the claim of the manufacturers of OxyContin that it is non-addictive, search for the article which is said to be the source of the claim in the NEJM. They cannot find it in the year cited.
What is more, they cannot find it in the previous or subsequent years. This being the first decade of the century, they resort to keyword searching. Eventually they find the source which is said to validate this claim. It turns out to be a five-line letter in the correspondence columns, but the screen shows the thousands of references raised by the keyword searching and a long and harrowing task for the researchers. As I watched it, I reflected how far the world had moved on from those haphazard searching days, along with the realisation that we have not progressed quite as far as we sometimes think, as our world of information connectivity keeps having to track back before it goes forward.
The great age of keyword searching was succeeded in the following decade by the great age of AI and machine learning hype. We welcomed the world of unstructured data in which we could find everything because of the power and majesty of keyword searching and because of our effectiveness in teaching machines to become researchers. And we will never know how much we spent or how many blind alleys we went up as we tried to apply prematurely systems that only worked when we built in enough information to enable us to be categorical and certain about the results that we achieved. And in science, and in health science in particular, categorical certainty is what we must achieve at a bare minimum.
We will never know the cost in time and money because it is not in the interests of anybody to tell us. But we can see, as we move forward again, that there is a wide recognition now that fully informative metadata has to be in place, and that identifiers have to point to a knowledge container, in order to steer machine intelligence in the right direction. All the work which we did in earlier decades on taxonomies, ontologies and metadata was not wasted, but simply became the foundation of the intelligent and machine-interoperable data society which we seek now to build. We are developing a clear understanding that the route to the knowledge we seek lies in the quality of metadata, and in the ability to rely upon primary data which is fully revealed in the metadata. Many of us now accept that primary data (by which I mean research articles and evidential datasets) will exist in future for most researchers as background: most of their research will take place in the metadata, and they will dip into the primary data only on rare occasions. There is a clear parallel here to other sectors, like the law, where the concepts and commentary become more important than the words of enactment.
And so, as the age of reading research articles comes to an end, the business of understanding what they mean has to take further strides forward. This means that existing players in the data marketplace have to reposition, and this week's announcements from CCC exemplify that repositioning in a very clear manner. CCC is an extremely valuable component of scholarly communications: its history in the protection of copyright and the development of licensing has been vital to helping data move to the right places at the right times. But CCC management know that this is no longer quite enough: their role as independent producers of the knowledge artefacts that will make the emerging data marketplaces succeed is now coming to the fore. In two announcements this week they demonstrate that transition. One is the acquisition of Ringgold, an independent player in the PID marketplace. Amongst all metadata types, PIDs and DOIs, alongside ORCID, have enabled the scope of the communications database marketplace to emerge.
PID means persistent identifier. In the case of Ringgold, it means making sense of organisational chaos, thus disambiguating some of the most confusing aspects of the sector. Just as ORCID sought to disambiguate the complex world of authorship, so Ringgold seeks to help machines reading machine-readable data understand the nature and type of the organisations and activities that are being described. And then it extends this one stage further. It is one thing to know these fixed points of metadata knowledge, quite another to use and manipulate them effectively to create the patterns of connectivity upon which a data-driven society depends.
While announcing the acquisition of Ringgold, CCC also announced a huge step forward in the developing world of knowledge graphs. Putting labels on things is one thing: turning them into maps of interconnected knowledge points is another. CCC have been brewing this activity for some time, as their Covid research development work demonstrates. It is impressive to see this being done, and being done by a not-for-profit with a long history of sector service and of neutrality amid the competitiveness of the sector's major players. We are increasingly aware that some things have to be done for the benefit of the entire sector if the expectations of researchers, and of the sector as a whole, are to be achieved.
And of course, those competitive major players are also moving forward. The announcement in the last two weeks by Elsevier that they have made a final bid for the acquisition of Interfolio is a clear indicator. This implies to me a thesis that article production, processing and distribution is no longer in the development marketplace. We are now critically aware that the business of producing, editing and presenting research articles may be fully automated within the next decade. With even more development work going into the automation of peer review checking, human intervention in the cycle of research publication may become supervisory, and the bulk of production work may shift to the computer on the researcher’s desk.
There will still be questions of brand, of course. There will still be questions of assessment and valuation of research contributions to be made. And it is this latter area which becomes a major point of interest as Interfolio takes Elsevier into the administration of universities and the administration of funding in a vitally interesting way. This is the place where new standards of evaluation will be developed. When the world does let go of the hand of the impact factor, which has guided everybody for so long, Elsevier want to be at the centre of the re-evaluation. In other words, if Elsevier as market leader formerly competed most with fellow journal publishers, its key competitor in future development may well be companies like Clarivate.
The critical path of evaluation will be the claims made by researchers for their work, and the way those claims are validated in subsequent reproducibility work. In this connection we should also note the work of FAIR and its collaborators in developing ideas around "nanopublishing", using metadata to clearly outline the claims made by researchers, both in the conclusions of articles and in other places as well. If this had been in place, the OxyContin investigators would have found it easy to locate that letter from the beginning of the century.
End
David R Worlock, Chief Research Fellow, Outsell Inc.
Although we are wearing life-jackets, we struggle in the water. The turbulence surrounding climate change and Covid-19 is so great that we are tossed in the wake of these vessels. In just 24 months, our ideas about the Future, the sunlit uplands of our visions of technology-enhanced work and leisure, the improvement of the human condition, the notions of incremental progress and exponential growth, have been shaken. Suddenly the Future is something endangered, something to be preserved, something to be secured, and sometimes something to be feared. This moment calls for courage and decisiveness. We have to clarify our objectives and build towards our desired outcomes regardless of tradition or orthodoxy. It is too late to say that the water is rising: we are in the water already and floundering. What we did before as researchers, librarians, publishers, intermediaries, is less than relevant to what we do next. What we do next will include things we have never tried before, so we will have to learn quickly and move flexibly.
Metaphors can only be stretched so far. In reality a good taste of the Future is already with us, though, as William Gibson so accurately forecast, it is not evenly distributed. While preparing for this keynote through the autumn, I learnt at the CERN/University of Geneva discussions on innovation in scholarly communications that some scholars already envisage publishing on low-cost open platforms managed and run by researchers and their institutions. Yet at the Frankfurt Book Fair it was easy to sink back into the atmosphere of a scholarly journals world, post-print, but still adhering to the practices and principles of Henry Oldenburg and the Proceedings of the 1660s. Yet everyone was aware that something had happened. 450,000 Covid and Covid-related articles had been published in the previous 24 months. Everyone had seen submission growth during lockdown. Everyone paid lip service to the idea that something impressively vague, usually "AI", would get us out of the hole. Everyone, as always in the fragmented workflow model of scholarly communications, wanted only to concentrate narrowly on their own piece of the action, regardless of what was happening elsewhere.
If any of the participants were able to take a holistic view of what is happening to researchers, then I think that these conclusions, amongst others, would offer themselves:
These three facets of our future, all available now within plain sight, argue a certain view of change. Yet the change will not be dictated by publishers or indeed librarians. Their roles will develop and alter as a result of the decisions made in the research community about the future. Indeed, change has already taken place as a result of Open Access. Recent reports indicate that some 33% of articles are now published this way, and the STM report forecasts that in research-intensive countries, the UK specifically, that figure will reach 90% by the end of 2022. Many, including myself, think that the average may be higher globally, and close to 50%. OA has taken 20 years to arrive, but it has come because of increasing researcher and research institution approval. Yet for many researchers, asking questions about the way science was communicated was the smallest issue at hand, even if it was the most easily addressed. OA was simply the preface to the book entitled "Open Science", in which scientists question and debate every aspect of the process of research and discovery. We should be very glad of this. If science is indeed our only hope of rescue from these storm-tossed waters in which we bob helplessly, then the very least that we want is for it to be accurate, ethical, constructively competitive where that helps, completely collaborative where that helps, and squarely based on the evidence available to all. That evidence will be largely data. As Barend Mons, Professor of BioSemantics at Leiden and Director of FAIR, says, "Data is more important than articles". And this is where the future begins.
"As most article writing is increasingly done using machine intelligence ... the article can be fragmented and each element published when ready."
The picture painted so far shows machine intelligence intervening to ameliorate human issues with handling content. The future is about building the structures which will accomplish that. As so often, it is not about inventing something wholly new to do the job. Artificial intelligence has been with us in principle for 60 years, and from the expert systems and the neural networks of the past 20 years we can draw a mass of practical experience. This is not to say that there are no problems: what we can do depends critically on the quality of our inputs. In many sectors of working life, data bias remains a real problem, and algorithms can as a result inaccurately represent the intentions of their creators. The positive fact is that we can plan to use a whole range of AI-based tools to address our issues. Deep learning, machine learning, the now widespread use of NLP, the increasing effectiveness of semantic-based concept searching and comparison, as well as other forms of intelligent analytics, have all been deployed effectively over the past five years, and intensively in Covid research. And yet we have not yet entered the Age of Data in scholarly communications, despite the daily practices of many researchers.
"Our sense of priorities is upside down. Data is always more important than articles, and will continue to be in the age of machine intelligence."
We cannot seem to break away from the notion that communication is narrative. The journal article, a report on an experiment or a research programme, is in itself a narrative. It is a story told by humans to transfer knowledge and information. This means nothing to machine intelligence. The metadata that guides machine-to-human communication will be far less effective in promoting machine-to-machine interoperability. If we want to use this interoperability to, for example, rerun an experiment in simulation or in reality, or find every place where similar results have been recorded using similar methodologies but described in different words or languages, then we need an augmented set of signposts to shorthand the way that two machines speak to each other. And we need protocols and permissions that license two machines to negotiate data across the fragmented universe of science data, and across its innumerable paywalls.
"Considering that most of the readers of scholarly articles are now machines, we should prepare those articles so that machines can interact with them even more effectively."
FAIR and GO FAIR have made great strides in making this new world possible. There is a role for publishers and librarians in helping to ensure that data is linked to articles, saved in a safe repository and fully accessible, with efforts being made to develop the business models that improve metadata and thus machine-to-machine exchange. There is an even bigger role in ensuring that all parts of the article are fully discoverable as separate research elements, through added metadata, to support the full interactivity of machine intelligence. It is predictable that in time most readership of articles will be by machine intelligence, and that much of what researchers know about an article will come from the short synopsis or the infograms provided by alerting services or impact and dissemination players, who will have an important role in signposting change and adding metadata (Cactus Global's R Discovery and Impact Science, or Grow Kudos, are good examples). Researchers will predictably become adept users of annotation systems (hypothes.is), writing their thoughts directly onto the data and content-as-data to create collaborative routes to discovery. Wider fields of data will become more routinely available, as DeepSearch9 have demonstrated with the deep web and with medical drug trials. Some researchers will desert long-form article writing altogether, preferring to attach results summaries directly onto the data and distinguish them with a DOI, as they have done for many years in cell signalling, and as members of the Alliance of Genome Resources do in their Model Organism databases. Here again DOIs and metadata connect short reports (MicroPublishing) to the data. And, if we follow the excellent development work of Tobias Kuhn, we shall be publishing explicitly for machine understanding ("nano-publishing").
Of course, we are still a long way away from the prevalence of this very different world of scholarly communications, at least for the generality of researchers. And if this is really the way we are going, then we should expect to see some stress points along the way, some indicators that the main structures of the content world of article publishing are beginning to bend and buckle. We should also expect to see the main concerns of Open Science beginning to have an impact. Every observer of these developments will have their own litmus list of indicative changes: here are mine:
1. Article Fragmentation. Over the last thirty years I have several times acted as a judge in contests to create The Article of the Future. Some of these, notably the one created by Elsevier, showed huge technical ability. We are now used to the article that contains video, oral interviews and statements, embedded code, graphs that can be re-run with changed axes, and, in healthcare (OpenClinical), embedded mandates that can be carried over into clinical systems. Some of these artefacts need to be searched in a data-driven environment if we are to find exactly the moment in the video where certain statements were made, for example. Articles stored in traditional databases are not normally susceptible to this type of enquiry. I expect to see articles appearing in parts across time, linked to the early-stage research activity (morrissier.com, and Cassyni) which makes seminars, conference speeches and other material created prior to the termination of a research project available and accessible as indicators of early-stage research results.
The influence of Open Science on the redevelopment of the article will be acute. Pre-registration, the process by which research teams publish their hypothesis and methodology at the very beginning of the research process, is designed to prevent any subtle recalibration of expectations against results in the process of formulating the published report. PLoS has implemented a service that trials this idea. At the same time Open Science demands that the searchable record should give much better coverage of successful reproduction of previously published findings, as well as coverage of failed experimentation and of failure to reproduce previous results. All of this has obvious value in the scientific argument: little of it is in tune with the practices of most journal publishers. I expect to see journal publishing becoming much more like an academic notice board, with linked DOIs and metadata helping researchers to navigate the inception-to-maturity track of a research programme, as well as all of the third-party commentary associated with it.
2. Preprints and Peer Review. Critics of what is happening currently, as scholarly communications gradually eases itself into a born-digital framework for the first time, point to the over-production of research, and in particular to the rise of the preprint, as proof of too much uncategorised, lightly peer-reviewed material in the system of scholarly communication. There are always voices that want to go back the way we came. Others point out that if we can successfully search the deep web, 90% greater than Google, then searching a few preprint servers should not be too much of a challenge, especially if we get DOIs and metadata right first time. And in thinking about this we should factor in the idea that developing the sophistication of our identifiers, increasing the range and quality of metadata applied throughout the workflow of scholarly communication, and extending the reach of semantic enquiry remain bedrock needs if scholarly communication is going to function, let alone become more effective. By the time these processes reach maturity, we will have long ceased to refer to any of this material as "articles". We will simply refer to "research objects" in a context where such an object might be a methodology, a peer review, a literature review, a conference speech, an hypothesis, an evidential dataset or any other discrete item. Progress in this direction will be the way in which we measure the real "digital transformation" of scholarly communications.
3. When do we do Peer Review? In 2021, two of the physicists who won the Nobel Prize, both over 80, were distinguished for work accomplished in the 1970s and 1980s. Open Science points out that our current peer review system does not account for changes in the appreciation of scholarly results over time. In addition, the current system can shelter orthodoxy from criticism and, in the narrow confines of a small sub-discipline, is open to being 'gamed', if not corrupted. Many subscription publishers cling to peer review, along with the VOR (Version of Record), like a comfort blanket, sensing that this may give them the 'stickiness' to remain important in an age of rapid change. It helps that for many publishers peer review is something they organise but do not pay for, leaving an uneasy feeling that it may not survive a reluctance amongst researchers to volunteer (a shortage is being felt in some disciplines already) where neither pay nor public recognition is available.
Two factors complicate this issue. One is timing in the publishing process. Do we really need an intensive review at this point? Funders have reviewed the research programme and the appointed team, and will be able to do due diligence against those expectations. The availability of much more information around reproducibility, or the lack of it, amongst the flow of research objects is important here, but takes time post-publication. The ability of critics and supporters to add commentary within this workflow will become important, providing the critical input of named individuals who are prepared to stand behind their views. The introduction of scoring systems that are able to assess the regard in which a body of work is held, and to index changes to that over time, will be a critical developmental need. And then the second factor contributes: AI-based analysis has already proved successful in reducing the element of checking and verifying which is part of each peer review. The UNSILO engine, part of Cactus Global, executes some 25 checks and is widely used to reduce time and effort in publication workflows. As work like this becomes even more sophisticated and intelligent, it will not simply improve the quality of research objects, but will create its own evidential audit trail, reassuring researchers that key elements have been checked and verified.
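A crude sense of what such automated screening looks like can be given in a few lines of Python. The checks below are invented for illustration; they are not UNSILO's actual checks, just the simplest possible rule-based analogues of the kind of verification that precedes human peer review and leaves an audit trail behind it.

```python
import re

def screen_manuscript(manuscript: dict) -> dict:
    """Run a handful of illustrative pre-review checks and return a report.

    The check list is hypothetical; real engines such as UNSILO run far more
    sophisticated, model-based analyses than these string tests.
    """
    text = manuscript.get("text", "")
    known_refs = {r["id"] for r in manuscript.get("references", [])}
    checks = {
        "has_abstract": bool(manuscript.get("abstract")),
        "has_references": len(known_refs) > 0,
        "has_data_availability_statement": "data availability" in text.lower(),
        "declares_ethics_approval": "ethics" in text.lower(),
        # Every bracketed citation key in the text should resolve to a listed reference.
        "no_orphan_citations": all(
            ref_id in known_refs for ref_id in re.findall(r"\[(\w+)\]", text)
        ),
    }
    checks["passed_all"] = all(checks.values())
    return checks

# Hypothetical manuscript record, for illustration only.
manuscript = {
    "abstract": "We study ...",
    "text": "As shown previously [ref1] ... Data availability: see repository. Ethics approval was obtained.",
    "references": [{"id": "ref1", "doi": "10.9999/example"}],
}
print(screen_manuscript(manuscript))
```

The report produced by each run is itself the "evidential audit trail": a machine-readable record of what was checked, when, and with what result.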
4. Open Access/Open Platform. The rush to embrace change is so prominent in certain parts of our society that we tend to turn the changed element into the New Orthodoxy well before its maturity date. This is certainly the case with Open Access, when perhaps the question we should be asking is "How long will Open Access survive?" OA is a volume-based business model. This is important to recall when there is pressure for APCs to reduce, and when Diamond OA becomes a topic of real interest and concern. Diamond OA often relies on the voluntary efforts of researchers and small scholarly societies, and these efforts can prove to be sporadic. Predictably, Open Access will lead to an even greater concentration of resources in a very fragmented industry. While Springer Nature and Elsevier are described as behemoths within scholarly communications, they are far from the size of the major media, information or technology players. OA will drive more Hindawis into more Wileys.
Alongside this we must note changes in publishing workflows. As APCs stabilise and tend to decrease, margins will be maintained by the increasing application of RPA, Robotic Process Automation. The technology which today can write a legal contract proves equally adept at reading and correcting a proof, resolving issues in a literature search or creating a citation listing. Yet publishers who today look at process cost reduction as a way of staying in business must also factor in the elimination of barriers to entry that this involves. We shall reach a point where mass self-publishing of research objects, whether still in articles or not, becomes very feasible. The successors of the Writefulls, the Grammarlys and the WeAreFutureProofs of today will become the desktop tools of the working researcher. And then the F1000s of today, or their ORC derivatives, or the Octopus project recently funded by UKRI, will assume the status of Open Platforms, the on-ramps that move articles and then research objects into the bitstream of scholarly discussion and evaluation. This too will give an opportunity to address the most glaring omission in today's scheme of process: the lack of a cohesive dissemination element. The irony here is that, for many participants, getting published means 'everyone knows about it'. Clearly they do not. Some publishers offer large volumes of searchable content behind paywalls, and the whole sector talks learnedly about "discoverability". Why, in the age of knowledge graphs and low/no-cost communications, a publisher would not feel able to alert every researcher in a given sector to the appearance of fresh materials linked to their research interests is a mystery. The gap has been partially filled by social media players like ResearchGate, but as long as the social media remain advertising-based, some researchers will reject this. Players like ResearchSquare, Cactus's R Discovery and Impact Science, and Grow Kudos all address these issues in various ways, but gaining impact from meaningful dissemination remains a blind spot for many publishers.
5. Metrics. It is obvious enough that new systems of metrics will grow out of the evaluated bitstream of scholarly communication. While citation indexation fades for some, it does not go away. Using altmetrics to create new measures, like Dimensions, provides a welcome variation, but is still far from being a standard. If it looked at one point as though Clarivate was going to revive ISI to recreate the Impact Factor, then it has also looked in recent years as if Open Science advocates have set their faces against the impact factor as an indexation that can be so easily and obviously gamed. There is then a vacuum at the heart of the digital transformation of scholarly communications: we still do not know how to rank and index contributions so that searchers can see at a glance how colleagues rate and value each other's work. When we do (and I have jumped the gun by naming it "Triple I" already, for the Index of Intervention and Influence), it will capture and evaluate every network participation, from grant application to pre-registration intent, to early-stage poster and seminar and conference contributions, to blogs and reviews, and on to the researcher's own results and their reception and evaluation. Here at last the distortion of the pressure to "publish or perish" will be laid to rest.
CONCLUSION. I have tried to describe here a world of scholarly communications in motion. We need to watch very carefully in the next few years if we are to validate the direction and judge the speed. As we move into 2022, the way in which so-called "transformative agreements" are renewed or replaced will offer up plenty of clues. We need to validate experimentation in forms of communication, both long and short term. While many publishers assert that authors will not accept that data leading to reproducibility should be made available, PLoS have maintained one service in which data is linked by reference to the article after being placed in a safe data repository like Dryad or Figshare. They report no resistance to these requests. Unless we all experiment we will never know.
The approaches made by Open Science as a generalised movement for change and reform will be critical, as will the speed and completeness with which these ideas are accepted and implemented, especially by funders. The issues here will be both big and small. Retractions, the way they are notified and the way in which the discovery of retracted material is flagged to users, is a finite area that has required reform for many years. On the other hand, the moves in several countries and many institutions to de-couple article and book production from promotion or preferment in academic institutions have wide implications. Remove "publish or perish" and one of the main supports of the publishing status quo goes with it. It will not stop researchers being measured on the quality or impact of their contributions to scholarly communications, but it may well be that those contributions can be just one element of a multi-faceted rating.
Data and AI will continue to be central to the possible directions of change. Just as SciHub challenged the paywalls of the industry half a generation ago, so the announcement of the launch of The General Index in October marks a critical moment for researchers. There are alternative means of knowledge access and evaluation. There is nothing illegal about Carl Malamud's enterprise, but using text and data mining techniques to create an index of terms and five-word expressions of concepts in 107 million scholarly articles (just the beginning, says the team) and making it free to use and downloadable is a huge achievement. It means that the age of going to the source document, the version of record, recedes even further from the researcher's priorities, except as a last resort or where the wording is of critical importance. For those who have long held the view that most research in the literature would eventually be done only in the metadata, this is an early dawn.
Some will read what I have said and conclude that this is just another "the end of publishing" talk. This would be wrong. I want to reach out to the hundreds and thousands of data scientists, software engineers and architects who have joined what were once traditional publishing houses in the last decade. You have a key role and a huge opportunity as the digital transformation of scholarly communications at last gets underway. The data analytics, the RPA systems, the dissemination environments, the new services summoned up by the Open Science vision: all of these and many more provide opportunities to reboot a service industry and create the support services that researchers need and value.
USEFUL REFERENCES
AI enabled scholarly workflow tools and other support services :
Scholarcy.com
Scite https://scite.ai
Cassyni.com
UNSILO. https://unsilo.ai/about-us/
Barend Mons. Seven Sins of Open Science. ( slide set ) https://d1rkab7tlqy5f1.cloudfront.net/Library/Themaportalen/Open.tudelft.nl/News%20%26%20Stories/2018/Open%20Science%20symposium/Spreker%204%20open-science-Barend%20Mons_web.pdf
Open Science. The Eight Pillars of Open Science. UCL London https://www.ucl.ac.uk/library/research-support/open-science/8-pillars-open-science
Science advances by virtue of standing on the shoulders of giants, but sometimes you need a stepladder. Longtime public access activist Carl Malamud believes he is providing one in his newly launched (7 October) General Index, a way of filleting scientific knowledge and spitting out the essential bones which may yet rival SciHub, the Azerbaijan-based pirate site of full-text science articles, as the no-cost way to search the scientific literature without paying publishers for the privilege. In a world of pinched science budgets this may be appealing. Even more appealing may be the thought of getting to the essence without full-text searching, and the elimination of false leads and extraneous content.
It used to be a joke that one day the metadata around science research articles would be so good that you could pursue most searches through the metadata without troubling yourself with the text of the article. Indeed, in some fields, like legal information, the full text of cases could be a nuisance, and concordances, citation indexes and other analytical tools could be used to get quickly to the nub of the question. Today these are built into the search mechanism and the search strategy. Mr Malamud has a long history in public and legal information (see public.Resource.Org, his not-for-profit foundation and publishing platform). At one point he challenged Federal law reporting on cost and campaigned to become US Printer. But he is a very serious computer scientist and his target now is the siloed, paywalled world of non-Open Access science publishing. And the point of attack is both shrewd and powerful.
The weakness of the publishers is that their paywalled information cannot be searched externally, in aggregate, in a single, comprehensive sweep. Just like SciHub, Mr Malamud enables "global" searching to take place. He has built an index. Currently he covers 107 million articles in all of the major paywalled journals. He has indexed n-grams: single words and words in groups of 2, 3, 4, and 5. He has built metadata, IDs and references to the journals. And, he claims, he has done this without breaching anyone's copyright. He points out that facts and ideas are not copyright, and that his index entries do not attract copyright since they are too short to be anything but fair dealing. Publishers will no doubt try to test this legally, probably in the US or UK since common law jurisdictions look more favourably on economic rights. In the meanwhile it is worth pondering the words of part of his publication statement:
“The General Index is non-consumptive, in that the underlying articles are not released, and it is transformative in that the release consists of the extraction of facts that are derived from that underlying corpus. The General Index is available for free download with no restrictions on use. This is an initial release, and the hope is to improve the quality of text extraction, broaden the scope of the underlying corpus, provide more sophisticated metrics associated with terms, and other enhancements.”
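To make the n-gram idea concrete, here is a minimal Python sketch of the kind of extraction such an index rests on. It is purely illustrative and bears no relation to the General Index's actual pipeline: it simply tokenises a passage and counts every 1- to 5-word sequence, which is all an n-gram is.

```python
import re
from collections import Counter

def ngrams(text: str, max_n: int = 5) -> Counter:
    """Count every word sequence of length 1..max_n in the text (illustrative only)."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    counts = Counter()
    for n in range(1, max_n + 1):
        for i in range(len(words) - n + 1):
            counts[" ".join(words[i:i + n])] += 1
    return counts

# Hypothetical sentence standing in for an article's text.
sample = "OxyContin was claimed to be non addictive in fewer than one percent of patients"
index = ngrams(sample)
print(index["non addictive"])   # 1: the phrase occurs once in this passage
print(list(index)[:5])          # a few of the extracted n-grams
```

Scaled to 107 million articles, a table of such phrases with article identifiers attached lets a researcher discover where a claim is discussed without ever touching the full text, which is precisely the "non-consumptive" point of the statement above.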
It is very clear from this that science publishing, if it attacks the General Index, is going to do so on very tricky ground. Looking like monopolists is nothing new, but actually persuading researchers that publishers are instrumental in building reputation and career advancement weakens as an argument when the publisher is being pilloried for restricting access to knowledge. Building a new business in data solutions and analytics is a road that several have taken, but only the largest are very far advanced. This might be a time for the very largest to get together to discuss grouping services for researchers, but free, and without anti-trust implications? Old-style subscription journal publishing is getting boxed into a corner, with Open Platform publishing advancing quickly now, with applications like Aperture Neuro (https://www.ohbmbrainmappingblog.com/blog/aperture-neuro-celebrates-one-year-anniversary-with-new-publishing-platform-and-first-published-research-object) and work like the Octopus research project at UKRI that I have mentioned previously.
In all of this, Data, the vital evidential output from research and experimentation, remains neglected. Finding a business model for making data available, marked up, richly enhanced with metadata and fully machine-to-machine interoperable remains a key challenge for everyone in scholarly communications. Even when Mr Malamud's 5 terabytes of data (compressed from 38) are installed, it will only be a front-end steering device to guide researchers more quickly to their core concerns, and those will eventually result in looking at the underlying data rather than the article.
The references below include a Nature article with the only comment from a major publisher that I have seen so far. I wonder if they are talking about it in Frankfurt!
https://www.nature.com/articles/d41586-021-02895-8
We need to talk seriously about Futures Literacy. And we need to do it now, before it is too late. The decisions being taken in our boardrooms are getting bigger and bigger. And if they are not, then we should be very worried indeed. This month we come to COP26, exposing once again the need to take urgent steps to address climate change. The Board cannot simply leave all of this to the politicians, who will always be guided by what will give them electability. The decisions on climate, on investment in change and most of all on speed of deployment, will be critical in meeting targets and, eventually, in escaping the worst effects of hundreds of years of exploitation and neglect. Yet for many of us, as we steam towards the Metaverse at ever-increasing speed, it seems as if we have a parallel set of concerns. We know that we have to think about investing in the technologies that surround information content and data in the information industry. We also know that next year our customers will have different and enhanced expectations of us. We are sophisticated now as businesses, handling online service functions, raising fresh capital and working cohesively with stakeholders. Then why, oh why, is the pygmy in the room the way that we discuss the Future?
I am now past fifty years of working, as a manager, a director, a CEO and as an advisor to many boards. My experience of experience is that you do not really learn very much from it in periods of rapid change. When I started, it did not matter much if a senior manager could not distinguish Linotype from Monotype. Today it does not matter much if a manager cannot discuss Digital Twins or tell you how a GAN network operates. What concerns me is the nature of the dialogue, the discipline of the approach, the "empirical rigour" in the discussion, since these are the necessary supports for planning, and, above all, for planning timing, which are needed if we are to sustain any hope of making sense of what we need to do beyond Q2.
All too often, even at board level, discussion devolves to the anecdotal brilliance of someone's daughter and the app she found on Google, or the son who downloaded a course and passed Math without needing a tutor, or a visionary who someone has seen speaking on YouTube, or a book which someone had heard of but never actually read... This anecdotalisation of the Future makes me want to scream. I take it that we sit on Boards because we are charged by the stakeholders, beyond our governance duties, with the maintenance and growth of Value through Time. The Future is thus our mandate, not something to obfuscate around. We need to talk frankly about how we anticipate change, and just as we should be watchful now for bias in data, we need to start with a careful self-audit of our own bias about the future.
The most valuable work that I know in this area comes from UNESCO, and from Riel Miller, their head of Futures Literacy. The case he makes is impressive and has the huge merit of moving us away from an extrapolation-based thought process, where we all try to second-guess future trends from what we have experienced in our own lives. In the first instance, our own experiences are collected randomly. In the second, this method gives us no way of testing probability or timing. Far better, then, to try to develop strategies about the Future by creating, or reframing, our thinking through developing hypotheses, altering all of the variables and testing our assumptions. This sounds to me like a managerial version of scientific method, and a discipline devoutly to be wished for when we come to consider the lazy thinking around much of the Futurism that we read and hear. In the information industry, after all, we say that we are driven by data science. Some attempt to think scientifically may well be overdue.
So how do we go about the business of reframing our corporate thinking about the future? Riel Miller's suggestion is the Futures Literacy Labs concept, though I would not recommend this in some of our industry's corporate frameworks as a board-level activity. However, the opportunity to put some senior directors, key managers and some younger fast-track recruits into a regular meeting context where a discussion discipline is maintained around forming and testing concepts could both inform board decision-making and spark small-scale experimentation to test developed ideation. And this would be especially valuable and useful if the primary concentration was on our users and how they will work. This then forces us to think hard about how we continue to add value for them. It could stop this low-level assumptive discussion of generalities ("Of course, AI is the future of everything") and ground our arguments in the vital qualities that they seem to lack: Context and Timing. Above all, it widens the responsibility for the future: this does not rest with the CEO, the CSO or the CTO. It rests with all of us.
An Open Letter to Richard Charkin in response to his column in Publishing Perspectives (https://publishingperspectives.com/2021/09/richard-charkin-an-heretical-view-of-academic-publishing/)
Dear Richard. You know well the warmth and affection that I feel for your work and for you personally. But just at the moment, having read this piece (doubtless written to irritate!), I feel like Sancho Panza. I am sitting heavily on my mule behind you, Master. I see you applying the spurs to Rocinante's lean flanks, I see the direction of your lance, and I must cry out, though in your enthusiasm you will not be able to hear me, "Those be windmills, sire". Sixteen long years have passed since I, as Chief Researcher on the House of Commons enquiry, invited you to give the evidence that you cite here. In that time Open Access has ceased to be an innovation and has become a norm. This is not a battleground any more. In the five-day Geneva Workshop on Innovation in Scholarly Communications, organised by CERN and the University of Geneva and attended by 1,400 scholars last week, I heard no voice that even questioned the hegemony of Open Access.
The battleground is elsewhere. Let's stable Rocinante, give her a good feed of corn, and listen to some market voices. Like the ScholarLed consortium and the COPIM partners, who spoke in Geneva of pooling publishing software solutions online to create infrastructure and scale for scholarly self-publishing of Open Access monographs. Or like Knowledge Unlatched in Berlin, using the subscription business model you so love to "open" books subscribed by libraries. Or the MicroPublishing work sponsored by CalTech, which publishes short evidence-based articles, many by post-grads and early-career researchers, and which addresses one of the problems of the day: how do young scientists get recognition and build up a portfolio of work when the great branded journals are barred to them by elitism and economics?
Or we could go and talk about Open Science – really the subject of which OA is but a tiny sub-section. As publishers we always shrank from understanding how scientists worked, but since all the processes of that work are now contained in seamless digital networks we cannot avoid it. The Professor of BioSemantics at the University of Leiden is very clear. He says that the Data is now more important than the Article. His peers elected him President of CODATA, the international standing committee on research data, and he chairs the High Level Expert Group of the European Open Science Cloud. One of his problems, as he works to proliferate the FAIR protocols and the Global Open FAIR mandates, is that publishers rushed to publish articles but ignored the Data. There is no business model for Data. Yet its metadata and mark-up are urgent publishing problems. In a world where more machines than people are reading both articles and data, it is no good just marking up the narrative bit so that a machine can serve it up to a human. Machines do not do narrative. They do RDF. They understand triples. Publishers really do have a long way to go before anyone, man or machine, reading an article can find the evidence and vice versa, and before both humans and machines can find and fully interact with both.
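If a concrete illustration helps, here is a minimal sketch in Python of the kind of triple that could tie an article to its evidential data, using the rdflib library. The DOIs are hypothetical placeholders and the choice of the CiTO vocabulary is my own illustrative assumption, not anything mandated by FAIR or by any publisher.

```python
# Minimal sketch: expressing an article-to-dataset link as RDF triples.
# The DOIs and the vocabulary choice below are illustrative assumptions only.
from rdflib import Graph, Namespace, URIRef

CITO = Namespace("http://purl.org/spar/cito/")  # Citation Typing Ontology

g = Graph()
g.bind("cito", CITO)

article = URIRef("https://doi.org/10.1234/example-article")  # hypothetical DOI
dataset = URIRef("https://doi.org/10.5678/example-dataset")  # hypothetical DOI

# One triple in each direction, so a machine can go from narrative to evidence and back.
g.add((article, CITO.citesAsEvidence, dataset))
g.add((dataset, CITO.isCitedAsEvidenceBy, article))

print(g.serialize(format="turtle"))
```

Trivial as it looks, this is the grain of mark-up that lets a machine traverse from an article to its data without reading a word of the narrative.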
And then, of course, Open Science would restructure the article. Ethical considerations may yet demand that the hypothesis and the methodology be openly available before experimentation commences. Are publishers generally good, do we think, on the ethical side? Are retracted articles clearly marked as such in databases, so that no one would ever mistake one in a search? Are articles marked to show where work has been done to reproduce their results, and is that work linked to the original paper? Publishers really do need to understand how science is changing and work with it to provide the process tools it needs in terms of analytics, discoverability and reproducibility. Shifting to Open Access but postponing the real impact through transitional deals buys time, but that time has to be used to re-invent and re-invest in the future. Above all, we need to recognise the scale of what has changed. The 450,000 Covid-related research articles of the past two years defy human analysis. There is no time left for a decent tilt at a windmill, dear friend!
Last week's OAI 12, the Geneva Workshop on Innovation in Scholarly Communication, hosted as always by CERN and Geneva University, was a delight. Real scientists talking with real passion. Genuine case studies that underlined some critical issues where science can do better. A good sample of "citizen science" involvement to remind us that real people, not just scientists, can perform science as well as experience it and benefit from it. Once again the meeting was truly international, and once again it featured not just the performance of Open Access but the much wider implications, seen through the wider lens of Open Science. And it was immediately clear that Open Science is not just a lens but a prism, and those who look through it experience some very different emotions.
There is, for example, a chasm of intent between those who embraced Open Science as the democratisation of science, and those who worried about the purity of scientific performance. There is now a strong and practical demand to open up a wider understanding in the general public of scientific conclusions and what they mean. This has been given a sharper point by the pandemic, but it is worth noting that while professorships in the public understanding of science go back 30 years in many countries, and we have had many very distinguished science journalists, politicians and the press have real (and sometimes deliberate) difficulty in explaining what science means – and admitting its limitations. People who rally under this banner tend also to believe that all research funded by the state should have its results published by the state, so that all citizens and taxpayers can have access to it. They are met with voices who hold that too much science is published, too little selectivity is exercised, and too much duplication of identical experimental results is permitted in an Open Access context.
This tribe is confronted in turn by a fervent lobby who believe that the publishing of research results is notoriously incomplete. Where, they ask, is the data, evidential or not, that surrounds the scientific process? Certainly not lodged with the article, too often not even linked to it, too often not even available, just because commercial publishers never found a way to monetise it. And even when it is in a repository or linked to the article, it is often not presented in a way that makes it usable, either to another scientist using different analytics, or to another computer trying to reproduce the experiment. What hope, they then say, for "Open Science" when so much science is closed even to re-use by other scientists?
Beyond these knowledgeable Geneva conference attendees are the worried ranks of working researchers, who suspect that not everyone is following the same basic rules. Does evidence sometimes get distorted to meet the claims of the hypothesis? Is someone gaming the citations in order to get tenure or preferment? Is someone distorting what was actually put on record to create panic and discord? (It is hard to attend these meetings without being given a case history of anti-vaxxer conspiracy.) As in any community, rumour takes flight, and while it is impossible to gauge the extent of malpractice, Open Science now also means "opening up science" – shining a more public light on retractions, on plagiarism, and upon the claims of experimentation that defies reproducibility.
One striking conference session featured the very vigorous crop of small presses developing OA books programmes and sharing infrastructure to do so (ScholarLed, COPIM). They may point to the increasingly comprehensive and available workflow software for publishing, which may serve the desired democratisation by enabling every research team to report results and data to open platforms, subject to automated primary peer review, leaving the eventual status of the work in the hands of its readers and users across time. The proponents of 'too much' will be appalled, but this has been the drift since Open Access itself began – a fee-based business fuelled by APCs can only be a volume business. And if the future really is Diamond OA, it may cease to be a business at all. This will please some and not others, and the fault lines became clear in the final session. Under the chairmanship of Tracey Brown, the director of Sense about Science, Geoffrey Boulton of Edinburgh University and Kent Anderson of Caldera Publishing debated what had gone wrong. Would that they had debated what to do about it, because there is now a tendency in these sessions to search for a villain. For Professor Boulton the universities are to blame. They created the "publish or perish" world and cannot retreat from it fast enough. It is they who have "lined the pockets" of major publishers with profits from articles read by "about 0.5 readers" per article. For Kent Anderson it is the techno-utopians (a term he has kindly used on me in the past!) and the tech companies. Academic publishers are the virtuous providers of journals "whose focus on rigour and quality" is so lacking elsewhere. He points especially at the irresponsible pre-print servers ("MedRxiv and ArXiv are funded by Facebook essentially"). At times, while condemning Google and Facebook for amplifying conspiracy theories, it almost sounded as if, by connecting Steve Bannon, anti-vaxxers, CERN and predatory journals in the same context, we were knitting a few of our own. A sad lapse in a very interesting session.
It was a really interesting and informative five days, with many voices heard that are normally silent or ignored. Our urgent needs – finding more effective means of evaluating research and researchers, giving scope to the ongoing evaluation of science research as it changes over time, ensuring and recording its reproducibility, safeguarding its accessibility, and getting the evidential data in place and reusable by both man and machine – engage all of us in scholarly communications: publishers and software developers and data and analytics companies as much as researchers, funders, institutions and librarians. Our path to a new consensus will be eased if we concentrate on the debate and avoid the smears. (https://oai.events)
In the age of Open Science, nothing seems more natural than the opening up of processes that have hitherto been closed within university, departmental or research team practices. In the last five years the academic conference has become increasingly well covered by networked services. Think of Underline Science, or, over a longer period, of Riverview. Then add in Morressier, reproducing the meetings and then using the data derived from posters and conference proceedings as indicators of progress in early-stage research. The ability of players like the latter to add the content to the citable research record through DOIs, and to make transcripts as searchable as any other content in the scholarly communications workflow, is a huge step forward in process transparency.
But if we thought this was the end of the story, Cassyni (https://cassyni.com) proves us wrong. Its founders, one of the most credible teams in scholarly communications, point out that below the level of formal conferences sits a huge volume of scholarly seminars. They note that the pandemic drove these onto Zoom, which suddenly created some benefits of its own (sharing thoughts with other groups, showing how departments or research groups work as a promotional or recruitment tool), but they also believe that fewer than 10% of these sessions are now searchable or retrievable by third parties. And they think that the number of such sessions globally could be up to a million. So the founders of Publons, Kopernio and Mendeley saw a new challenge in front of them: creating a sustainable business that provides standardised tools to record, transcribe and create searchable files of such seminars, to add DOIs and metadata to improve search effectiveness, and eventually to index not only past and current seminars but also to signpost future events. And, since comprehensive coverage is important, they also recognise that some institutions will want a private or embargoed service – Open always has to live with 'sensitive'.
In the Cassyni toolset, this huge quantity of academic presentation and debate gets a Zoom video for each seminar, with the organiser adding an abstract and attaching the slide deck(s). The DOI and metadata will be available within CrossRef, and so publicly searchable and citable. They envisage material coming from university departments running series of meetings designed to keep everyone updated and to stimulate fresh thinking; from inter-disciplinary and inter-institutional seminar efforts promoting knowledge exchange and co-operation; and from society journals and other journal editors seeking to explore and develop new topics. Indeed, it is easy to see Cassyni developing as an editorial tool, since this is surely how most journals were created, or bifurcated, in the first instance. The Cassyni team are surely exploring the primordial soup from which journal life on Earth first sprang?
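Because those DOIs land in the public CrossRef index, a seminar record becomes retrievable with exactly the same tooling a machine would use for a journal article. A minimal sketch, assuming only the public CrossRef REST API; the DOI used here is a purely hypothetical placeholder, not a real Cassyni record.

```python
# Minimal sketch: fetch the public metadata for a DOI from the CrossRef REST API.
# The DOI below is a hypothetical placeholder used for illustration only.
import requests

def fetch_doi_metadata(doi: str) -> dict:
    """Return the CrossRef metadata record for a given DOI."""
    response = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    response.raise_for_status()
    return response.json()["message"]

if __name__ == "__main__":
    record = fetch_doi_metadata("10.1234/example-seminar")  # hypothetical DOI
    print(record.get("title"))  # title(s) registered for the record
    print(record.get("DOI"))    # the resolved DOI string
```

The point is not the few lines of code but the symmetry: once a seminar is registered like an article, it inherits the article's whole discovery apparatus.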
Who learns what from all of this activity – another huge searchable store of primary science research material? Certainly seminar organisers will get a great deal out of it, since the feedback on topics and techniques will be rich, and full of tips on making such series really hum. One can imagine that journal editors and publishers would respond to clear signs of where the next papers are coming from – this is an embryonic research indicator – and of what new topics were flying. Universities will use the popularity of their seminar series as a promotional tool ('most downloaded', etc.) and individual researchers will list their searchable seminar sessions amongst their publications – and their citations. But, in the end, Cassyni stands or falls by its utility to the individual researcher: by the delight of being able to hear exactly what was being said when Slide 10 was on the screen, or what the response was to a particular issue when it arose in Q&A in any seminar series.
The Cassyni system has been widely trialled in beta (the views of Peter Vincent at Imperial College London are particularly interesting: https://blogs.lse.ac.uk/impactofsocialsciences/2021/09/01/rethinking-the-research-seminar-for-a-post-covid-world-with-cassyni/). It is now fully installed at Te Herenga Waka—Victoria University of Wellington in New Zealand as a partner site. The Journal of Computational Physics (Elsevier) is an early user. As we salute the continuing inventive energies of Andrew Preston, Ben Kaube and Jan Reichelt, one also has to wonder about two things: once it is widely used, will Cassyni by its very presence alter the nature of the seminars and the communications that it records – and are there still any unexplored territories left in scholarly workflow and scholarly communications that need this exposure to the digitally networked world?
SPEEN WRITERS FORUM
Everyone likes a story; Everyone has a story to tell; Everyone can write
As part of the SPEEN FESTIVAL 2021 we invite you to a taster event that has the opportunity to become a local hub for creative writing.
FRIDAY 10 SEPTEMBER AT 7 PM AT SPEEN CHURCH
A new venture for anyone who has ever thought of writing something – a story, a poem, a novel, a memoir, a blog, a podcast… How do you get started, what local support can you get, how do you reach an audience?
Come prepared: we ask participants to come ready to talk about a remembered experience – and about any books or writers that they have really enjoyed. The meeting will be informal, interactive, and driven by its participants. It will offer an opportunity to write, and also to learn how to read and provide constructive feedback.
Age is no barrier – our opening agenda ranges from village efforts to use oral history interviewing to capture memories from those who now find writing difficult, right through to using social media such as Discord as an outlet for creativity.
What do you need to do NOW?
1. Engage with your family and friends to spread this invitation, and come and join in our inaugural effort.
2. BUY YOUR TICKETS from the Festival Box Office at boxoffice@speenfestival.com
3. When you have booked, you will receive an agenda suggesting some of the areas we can discuss if the meeting wishes to do so.
Proceeds from the event, and from book sales associated with it, go to the Speen Festival and the Speen Church Charities.
Hosts and Organisers: Anne and David Worlock