Weeknotes

Tim Davies

Exploring a Global Citizens Assembly on AI

I’ve been following up the first round of interview invites for our design lab on options for a Global Citizens Assembly on AI this week, and carrying out some of the first conversations and interviews. It’s a real pleasure to be collaborating with Claire Mellier of ISWE on the project - who is providing insight into a whole universe of thinking and practice on democracy and AI that was off our radar. I’m also delighted that Connected by Data Fellow Kristophina Shilongo is joining us as an advisor on the project.

It’s early days for the work, but I’m left at the end of this week reflecting a lot on the issues, and touchpoints, for global citizens dialogue on AI. The landscape of AI governance is increasingly crowded (GPAI, OECD, UNESCO, UN Advisory Board on AI, G7, G20, AI Safety Summit etc.), but without a settled hub - and yet, when we think about specific issues of concern, from copyright to climate impacts, or democracy to employment, there are a whole range of other global institutions (WIPO, ILO, COP etc.) that might also be relevant spaces for messages emerging from informed and focussed dialogue on AI.

I’m looking forward to a few more interviews in the coming weeks, and then starting to synthesise some initial notes.

Local Government in Focus

On Tuesday I put together a set of slides, and on Wednesday presented them to the LGA’s AI Network of around 100 council officers. I’ve put my full speaker notes from that talk here, and the discussion that followed was really stimulating. I was particularly happy to build on my recent work exploring a participatory and experimental approach to AI adoption with voluntary and community sector organisations by articulating a ‘Participatory Pilots 1 - 2 - 3’: a simple framework for involving affected communities in decisions around the adoption of AI.

One of the joys of engaging with local government is the profound combination of commitment to community, and practical, pragmatic working, that it brings. By the end of our discussions, we were exploring how to approach AI as the hook for wider discussions of service reform - both moving upstream to think about data infrastructures, but also the whole shape of services. I’ll hazard that in many cases, robust participation sparked by AI may result in less AI, and instead more nuanced service reform.

I was also glad that, by drawing on the draft Governance Stack game, I was able to bring the importance of worker involvement into the conversation - which was a theme Jeni and I picked up in conversations later in the week with a team of academics exploring a potential responsible AI project with a city council. Recognising that the distribution of attitudes towards AI in a workforce should be taken as a service design consideration (i.e. there may be staff who won’t want to adapt to an AI-enabled way of working, and others who do) may enable more nuanced approaches to AI introduction as an organisational change process.

Law Tech

This week the Law Society shared the interim report from their 21st Century Justice project, including a section on “Protecting legal services consumers in the age of AI” that draws upon a workshop I had the pleasure of participating in back in February. It’s good to see the report note that

“The attendees felt that the Law Society could play an important role in elevating the voices of civil society which are currently under-represented in conversations about the use of AI in the justice and legal services space.”

although this stops short of also noting the need for infrastructural organisations like the Law Society to support the direct voices of affected communities in decisions about future regulation of the AI legal services market. The report also implicitly hints at the importance of robust deliberative public engagement, noting that:

“Studies from the United States have identified that consumer appetite for using generative AI tools to give legal advice only increases once people have tried them. Yet research also shows that information returned by generative AI tools is often inaccurate where they have not been trained specifically on legal data from the jurisdiction in question.”

This juxtaposition should underscore the importance of engagement processes that both support the public to gain experience of AI tools and to hear expert testimony on their strengths and weaknesses, alongside the lived experience of peers, with opportunities to weigh these different perspectives before going ‘on the record’ with judgements about the desirability of, or regulatory guardrails needed around, new products and services.

Children’s Voice

On Thursday, I joined members of the Berkman Klein Center community for a session of the Berk Kidz discussion group: a small transatlantic gathering of lawyers, academics and practitioners discussing the potential impacts of the EU AI Act on children and young people, and digging into the implications of creating, curating or regulating LLMs for children. Two things particularly stick out as I reflect on the session:

(1) Exploring questions over when and where children should have access to LLMs is instructive for thinking about the more general question of what it means to make an informed judgement about when and where to use an LLM. What does it take to equip people of all ages to make a competent judgement about AI adoption, and to weigh the trade-offs both for the immediate task at hand and between completing a task quickly and developing skills and craft? I suspect there’s a read-across between this and the participatory pilots concept I was exploring in my talk to the LGA AI network on Wednesday.

(2) Tailoring an LLM to be ‘age appropriate’ needs to be the product of a participatory process that can work with both educators and children and young people to create something that helps children and young people to pursue goals they value. The default idea of an ‘age appropriate’ LLM might often be around adding filters and gatekeeping to restrict ‘inappropriate’ content - but a developmentally appropriate LLM, rooted in the principles of the UN Convention on the Rights of the Child, would need to also be tailored to support children’s evolving development of identity, freedom and autonomy.

In thinking more directly about the EU AI Act’s implementation, and how it will impact children and young people, our discussion also touched on the question of whether there are any suitable structures in the EU to ensure the voice of children and young people is feeding into the unfolding governance arrangements to implement the act, and whether institutions like schools and colleges are equipped to implement the spirit of the act. Similar questions might be asked when AI governance arrangements in the UK finally advance…

Reading

I’ve got a long reading list to catch up with, but two things I’ve been looking at this week:

  • Keywords of the Datafied State from Data & Society - which does a fantastic job of getting deep into how datafication changes the nature of the state, and demands more, not less, participatory practice. I’ve not finished reading all the chapters yet - not least because each one is packed so full of provocation and insight that many paragraphs deserve re-reading a number of times to let them soak in.
  • This Fact Sheet on AI and Independent Living recently published by Maitreya Shah, which provides a clear, concise and rooted summary of how AI and independent living interact - drawing on evidence from participatory workshops with persons with disabilities. It touches on the benefits AI can bring, and data on how widely it is being used for independent living, before looking at privacy risks, and wider social consequences we risk from AI system design, such as undermining the long-fought-for de-institutionalisation of persons with disabilities. It’s a great reference point for what it means to think about the needs of particular affected communities.

Life

I’m looking forward to talking alongside other Fellows at the Hawkwood Collect May Day festival here in Stroud on Monday.

I spent today finalising and submitting an application for the Mozilla Foundation’s Senior Tech fellowships, based on work I’ve been scoping on Common Algorithmic Transparency Standards.

And, after years of planning, we finally got work started on some major house renovations that will last for the next five months! So the week has involved a lot more scaffolding and floorboards coming up than usual - and I suspect if I’m on calls with you in the months to come there will be a larger chance of background noise.
