Emily’s previous work in public libraries led to an invitation, in her new role with CONNECTED BY DATA, to speak on a panel at the Libraries Connected Annual Seminar about ‘AI and public libraries’. The Libraries Connected audience is predominantly Heads of Service and Chief Executives of library services across the UK, alongside invited guests from organisations including Arts Council England, DCMS and RNIB. Keen to take the opportunity to speak about the work of CONNECTED BY DATA and the principles that matter to us, in a context she deeply understands, Emily was confirmed as our representative.
The panel session was chaired by the CEO of Libraries Connected, Isobel Hunter, and Emily was joined by Sital Mistry-Lee (Associate Director of Digital Inclusion Delivery at The Good Things Foundation) and Jaselle Edward-Gill (Programme Associate at Mozilla Foundation). It ran in a chat-show format, allowing for free-flowing conversation with prompt questions from the Chair and questions from the audience too.
In her introduction Emily spoke about the work of CONNECTED BY DATA, our vision and goals (as articulated in our strategic roadmap 2024-25), and gave an overview of the work we have completed on a progressive vision for data and AI policy and on challenging the proposed Data Protection and Digital Information Bill (which was dropped - thankfully - at the dissolution of Parliament). She also spoke about the People’s Panel on AI and the impact of that work. She reflected on what we have learned over the last year: AI is where the conversation is at; there is no AI governance without data governance; we need to build collective power; and diverse backgrounds make for the best conversations.
Discussions ranged from fear, trust, and the digital divide, to AI literacy and the importance of public voice in AI development, implementation and regulation. All wrapped up in the library context of connection, community and care.
@emilyjmacaulay saying at #LCSeminar24 how Civil Society needs to have a voice in the governance and development of AI, and it not just be left up to Big Tech to decide with Govt. @libsconnected pic.twitter.com/Zc8eD8TWXD
— Nick Partridge BEM (@NJPartridge) June 5, 2024
Sital from The Good Things Foundation spoke passionately about the digital divide and the impact of AI on that divide - both the opportunities and the risks. There is an opportunity for increased accessibility (through tools such as audio transcription of screens, books and faces) and potential for ‘levelling the playing field’ when writing letters or CVs - particularly, perhaps, if English isn’t someone’s first language. Jaselle from Mozilla Foundation remarked on the English and American bias of large language models, and how the limited opportunities for those wanting to use these tools in a language other than English risk widening the divide for some countries.
Asked whether AI will make libraries and library staff redundant Emily was clear and categoric in her answer.
No.
Libraries and library staff are about people. About connection, community and care. Library staff are far more than AI could ever wish to be. Encyclopaedias didn’t make library staff redundant. Computers didn’t. Google didn’t. And that’s even before considering the range of skills and services offered in libraries that go beyond access to information.
The concept of “trust” was touched on repeatedly in the discussion. Libraries are proud of being a trusted profession and have a history of tackling misinformation. Does AI risk undermining that trust? What does AI mean for trust generally? Emily suggested this question needs to be cognisant of the language around “AI”. There is nothing intelligent about what we’re currently calling “AI” - generative AI, that is, large language models like ChatGPT - which is simply vast, rapid processing of data that seeks the most common answer to a question. We’ve been using “AI” for years without questioning our trust in it: spell check, Grammarly, Google Translate (and even, as an audience member pointed out, the old Microsoft paperclip, ‘Clippy’). The trust question tends to be more pertinent around deepfakes, and when decisions by those in power are made on the basis of AI-handled analysis. This is a concern, and around the election we are probably in a peak period for trust in content being eroded, but we are seeing some organisations trying to be responsible in pushing back against AI-generated content that isn’t clearly identified as such. A recent small example was press agencies withdrawing Princess Kate’s family photo after it emerged it had been digitally manipulated.
Photo credit: Sarah Mears (Libraries Connected)
Kester Brewin recently published “God-Like: A 500-Year History of Artificial Intelligence in Myths, Machines, Monsters”. In it he included an ‘AI transparency statement’ covering whether any of the book had been generated by AI, whether anything had been improved using AI (for example, Grammarly suggesting edits), whether any text had been suggested by AI (like ChatGPT helping to iterate) and whether any text had been corrected by AI (like spellcheck) - and if yes, whether with or without human discretion. Acts like this will help to promote transparency, and with it trust.
Trust in libraries around AI will depend on how libraries use AI. If libraries start introducing AI processes or producing AI content without being transparent about it, or in ways that don’t benefit their customers (and society more broadly), then trust in libraries will decrease - and it should. This risk can be mitigated by increasing public voice in decisions about how data and AI are developed, implemented and regulated. The more people are involved, and the more that voice shapes how AI operates, the more trust there will be. Libraries could - and I would argue should - lead by example. Libraries planning to introduce AI into their internal operations should take the time to understand what the system is doing with data and how to mitigate risks, and must bring public (and staff) voices into the discussions about how, when and why to implement such processes.
When considering the fear the public may have around AI, all panel members agreed this is most often due to it being unknown, and here libraries have a role to play. AI literacy is simply the next step in digital literacy. For library staff - and for the people using libraries - AI literacy must go deeper than just how to use the tools. We must enable library teams, library service management and everyone coming to the library to develop the skills to think critically about AI: questioning not just how to use it, but whether and when to use it, and how it works. Libraries have always held people’s hands through scary times: the introduction of public computers, changes in other technologies (such as self-service machines), COVID. AI is just the next step in that journey.
Library staff can play an important role in providing a safe space for people to explore LLMs - learning through play. Sital shared a video after the talk that The Good Things Foundation has created as an introduction to AI.
Finally, each panel member was asked what they considered to be the most important thing for libraries to think about in relation to AI. Emily responded:
“The most important thing for libraries to think about in relation to AI is no different to their most important thing through every other lens - people. Libraries are strongest when they are radical and innovative and responsive to local need. Libraries grow when they meet the needs of the vulnerable, they flourish when they focus on community, people, connection. And AI can’t do any of that. So libraries need to be mindful about the steps they take in relation to AI and must engage their communities in those considerations. Introduce and embrace AI as a developing technology - as libraries have a proud history of doing - but do so with the intent of improving and supporting society, not solely for efficiency or just because we can”.
After the event Emily shared on social media a couple of key links that underlined some of her comments in the discussion and responded to questions raised by the audience. These links were: a blog written for the Joseph Rowntree Foundation by her CONNECTED BY DATA colleagues Jeni and Tim, entitled “Let’s give people power over AI”; and a report published by the Ada Lovelace Institute on public attitudes towards AI.