From Local to Global
As I reflect on the last few weeks’ work at Connected by Data, I’ve been thinking a lot about the tensions and connections between the different levels we’re operating at. The fortnight has taken in sessions on data and AI governance at United Nations HQ in New York, support for grassroots community campaigns, planning for the place-based Gloucestershire Data Day #2, facilitating direct public engagement, connecting with colleagues’ work on UK policy both at Labour Party Conference and through new advisory roles, and convening conversations on ground-up and top-down data narratives.
Since launching our work exploring the potential of global citizen deliberation on AI, I’ve faced questions and critique from people sceptical about global processes and governance. And when talking about grassroots public engagement, I’ve found some folk switch off, seeing local action as too small and slow to bring large-scale transformative change.
But it was hearing from the ever-insightful Nnenna Nwakanma during a Summit of the Future Action Day panel that really captured for me the importance not only of each level, but also of making connections between them. Listing many friends from across the world who work on issues of technology governance, rooted in communities but occasionally engaged in global spaces, Nnenna pointed to the long struggle to build multi-stakeholder governance, reflecting that “I don’t need a visa to implement the GDC [Global Digital Compact] … I want to be at home, and have the same principle of multistakeholderism play out in everything at national level.”
We do the global work in support of the local. And we root in the local, in order that our global action comes from solidarity.
In short, we’re seeking to build a world of participatory and multi-stakeholder governance ‘all the way down’. And back up again.
Multi-stakeholder and participatory
I find it useful to think about the work we’re doing through these two lenses.
In the UK, for example, we are supporting greater multi-stakeholder governance of technology by helping civil society organise, and facilitating connections between government and civil society.
As the Data and AI CSO Network has started to hold more sessions with guests from different government bodies in recent months, I’ve been struck by the simple challenges government can face in accessing civil society inputs, and the difference it makes to have spaces that support meeting.
At the same time, increased civil society presence in policy-making does not remove the need for direct input from the people affected by policy. Participatory practice is therefore both a complement to, and an important component of, multi-stakeholder governance.
Which to focus on, and when, is often a strategic question.
Over the coming weeks we’re putting together a couple of funding proposals that focus on different aspects of the grassroots, local, national and global, and one of the things I’ll be working on for these is a clear articulation of how, in our work, these levels of action are intentionally connected and feed into one another.
Participatory uses of AI
We had a meeting of the Public Voices in AI People’s Advisory Panel this week, where one of our topics of discussion was the lifecycle of an AI application, and where and how within this lifecycle publics could or should be involved (introduced by Reema Patel as part of the creation of a framework for the PVAI programme). Building on discussions at previous sessions, we wanted to give panel members more insight into the people behind an AI system, and to think about how particular stakeholders might be influenced.
Within the constraints of our two-hour Zoom meetings with the panel, we realised it would be tricky to bring along lots of AI developers to speak, and we didn’t have the resources to collect interviews in advance, so we decided to experiment with ‘AI-generated expert testimony’. In part, this was inspired as a response to the kind of “algorithmic proxies for participation” that Delgado et al. found (§3.3.3) some AI developers turning to (asking AI systems, rather than people, for feedback on system design), and in part as a reflexive activity to explore how the group felt about simulated testimony.
To create our ‘imagined experts’, I used ChatGPT o1-mini with prompts such as: “You are a data engineer called Amy. You work for a LawTech firm preparing data from courts to create an AI case outcome prediction tool. You are giving a talk to explain in 60 seconds what the job of a data engineer involves, covering data cleaning, feature engineering and transformations.” With light editing (to get slightly more variety in the texts), I then fed these into Kapwing’s persona tool, which provides text-to-speech and video lip-sync functions, to generate videos like this one (using a low-quality lip-sync algorithm for speed: we didn’t have the chance to test the higher-quality options). We provided the panel with scripts for each persona (explaining how they had been generated) and access to the videos in advance of the session, and then played the videos over the Zoom call (where the poor lip-sync was more forgivable!).
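For anyone wanting to experiment with a similar workflow, here is a minimal sketch of the script-generation step, assuming the OpenAI Python SDK. The persona list, file names and exact wording are illustrative rather than the precise code we ran:

```python
# Minimal sketch of generating persona scripts with the OpenAI Python SDK.
# The personas and wording below are illustrative examples only; this is
# not the exact code used for the panel.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Each persona is a (name, role, brief) tuple describing an imagined expert.
personas = [
    ("Amy", "data engineer",
     "You work for a LawTech firm preparing data from courts to create an "
     "AI case outcome prediction tool. You are giving a talk to explain in "
     "60 seconds what the job of a data engineer involves, covering data "
     "cleaning, feature engineering and transformations."),
    # ... further personas (researcher, product manager, etc.) go here
]

for name, role, brief in personas:
    prompt = f"You are a {role} called {name}. {brief}"
    response = client.chat.completions.create(
        model="o1-mini",  # the model mentioned above
        messages=[{"role": "user", "content": prompt}],
    )
    script = response.choices[0].message.content
    # Save each script for light editing before text-to-speech / lip-sync.
    with open(f"{name.lower()}_script.txt", "w") as f:
        f.write(script)
```

The outputs then need a human editing pass (for variety and accuracy) before being handed to a text-to-speech and lip-sync tool such as Kapwing’s persona feature.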
In the panel session, the idea of an AI lifecycle was first presented by Reema using slides, and before the group heard from our video personas, we had a discussion about where in the lifecycle panelists felt public voice was important. At this point, much of the emphasis was on the data preparation stage. After we had shown the persona videos, the discussion shifted, with more people focusing on inputting to research. In part, this appeared to be because of how personable the simulated video presenter had come across, but I think it also reflected changed perspectives on the importance of the different lifecycle stages, having heard the information in persona rather than presentation format.
We didn’t run this as a full experiment, though we did lightly apply a participatory pilot model: exploring AI capabilities, and reflecting on them with those affected (the group felt it was an interesting way to engage with content; as facilitator I felt more conflicted about the value the videos brought). There is more to be done to consider whether this is an approach I’d want to use widely in future.
It was useful, though, to have in mind for a conversation later in the week with Rich Wilson and Claire Mellier of ISWE around potential uses of AI in deliberative practice. As Bianca Wylie argues here, using AI to replace the role of the facilitator in summarising and sense-making transcripts and texts is fraught with problems. Indeed, while I think we might want to offer facilitators better tools to run and write up deliberations, I’m not sure these need any AI elements. However, finding ways to help participants become informed offers more fertile ground. I think of Diplo Foundation’s playful experiments with AI interfaces to classical thinkers as a template for extending access to expert testimony, while ensuring that participants’ voices, when expressed, retain their authenticity rather than being flattened through current LLMs.
Other breadcrumbs from the last fortnight
- I published a write-up of our research on communicating AI to local government carried out earlier this year.
- I’ve been plugging into plans for Gloucestershire Data Day #2 (save the date: 16th December!) and getting excited about the potential for ‘Dash Battle’, a battle-of-the-bands-style showcase of data dashboards from across the county.
- P.S. The team are still looking for Data Day sponsors to support artists’ commissions and webcasting. Do get in touch if you might be interested in helping out.
- I’ve been hatching plans for a possible research symposium on participatory governance of AI to host alongside the AI Action Summit in Paris in early Feb next year. Concept note and planning document here if you want to get involved!
- I’ve had some great follow-up conversations from my time in New York: meeting yesterday with Robert Whitfield of the One World Trust to discuss some of the common ground, and dividing issues, for civil society engaging with AI governance, and ending the day with a conversation with Colombe Cahen-Salvador, whose transnational political organisation Atlas Movement has made a Global Citizens’ Assembly on AI part of its policy programme.
- I had a fascinating conversation with members of the Just Algorithms Action Group around their work on the climate impacts of AI, and carbon accounting for the additionality of data centre development.
- I’ve been reading Richard Pope’s thought-provoking Platform Land, which has been particularly interesting to reflect on in light of our recent Connected Conversation with Global Voices, which invited us to consider the instability of states and the risks of digital authoritarianism. Connecting a positive vision of the potential for human-centred public services with protections against unchecked power requires design at a variety of levels (levels which, I should say, I think Richard’s book is sensitive to, even if not focused on).
- I’ve been working on a couple of funding bids: more on those soon.