We joined forces with Jed Miller of 3 Bridges and the Accountability Lab team to host a discussion in Washington, DC on Tuesday, September 24th at 4 PM ET on the potential and the practicalities of bringing citizen voices from across the globe into AI governance.
During the workshop, Tim presented our newly published paper exploring Options and Design Considerations for Global Citizen Deliberation on Artificial Intelligence.
An adapted version of the presentation is included below.
Good afternoon all. Thank you for joining this discussion on the role of citizens in AI governance, drawing in particular on a paper we’ve just published with ISWE Foundation, based on our design lab on options for a global citizens assembly on AI.
I should start by saying that Connected by Data is not solely focussed on Citizens’ Assemblies: we see them as one method in a much wider toolbox of approaches to give voice to affected communities in the governance of data and AI. However, we had the opportunity to work on this paper with partners from the Coalition for a Global Citizens Assembly, producing an options paper to parallel an earlier paper on climate assemblies, and a new paper on global deliberation on health.
I also want to set the scene for this particular presentation in the context of the UN Summit of the Future, which a few days ago agreed the Global Digital Compact, setting out part of a proposed future architecture for the global governance of AI, with an International Scientific Panel and a Global Policy Dialogue. These are ideas that were put forward in Governing AI for Humanity, the report of the UN Secretary-General’s High-Level Advisory Body on AI, published last Thursday. The video here gives you a small flavour of how that was framed.
In Governing AI for Humanity there is a diagnosis of a governance gap for AI. This is, however, primarily framed as a geographic governance gap, with AI agreements dominated by a small number of countries. So, whilst the call for a broader international scientific panel, and a regular global policy dialogue on AI, is intended to create more seats at the table for majority world governments, the report neither diagnoses nor proposes solutions to perhaps the bigger problem: the participatory governance gap.
AI governance is, in the dominant frameworks now put forward, predominantly a matter for companies, governments, and selected experts. Citizens, and in particular, citizens from the margins, are not represented.
There are, of course, many correctives to this that can be put forward, such as the civil society co-chair framework adopted by the Open Government Partnership, or reserving spaces within panels for citizen representatives. However, these approaches are still only able to include a small set of global experiences on AI.
So, it is in this context that we might look to the governance innovation of the Citizens Assembly. The image on screen here is from the first Global Assembly on the Climate and Ecological Emergency run in 2021 for the Glasgow Climate Conference of Parties: and it shows transnational online deliberation, with people from many different countries joining in dialogue together.
There are a number of important elements in the design of a citizens assembly. Firstly, participants are generally selected through sortition, a form of random stratified sampling that seeks to remove sampling bias. Secondly, participants are offered access to balanced expert inputs, with oversight of those inputs from a governance group who again seek to limit biases being introduced. Thirdly, participants have time for facilitated deliberation: talking to each other to combine the inputs they have heard with their own experiences, and to come to informed viewpoints on the questions at hand. Lastly, the assembly produces some sort of output: either consensus recommendations, or another form of report.
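For the technically minded, the sortition step can be sketched as stratified random sampling. This is a minimal illustration only: the strata (region and age band), pool size, and quota rule are assumptions for the example, not the design of any actual assembly, which would typically stratify on many more dimensions and handle quota rounding more carefully.

```python
import random
from collections import defaultdict

def sortition(pool, strata_key, assembly_size, seed=None):
    """Select assembly members by stratified random sampling (sortition).

    pool: list of candidate records; strata_key: function mapping a
    candidate to their stratum. Each stratum receives seats roughly in
    proportion to its share of the pool, removing self-selection bias.
    """
    rng = random.Random(seed)
    strata = defaultdict(list)
    for person in pool:
        strata[strata_key(person)].append(person)

    selected = []
    for members in strata.values():
        # Proportional quota, rounded, with at least one seat per stratum;
        # rounding means the total can drift slightly from assembly_size.
        quota = max(1, round(assembly_size * len(members) / len(pool)))
        selected.extend(rng.sample(members, min(quota, len(members))))
    return selected

# Illustrative candidate pool: region and age band are assumed strata.
pool = [{"id": i,
         "region": ("North", "South", "East")[i % 3],
         "age": ("18-35", "36-60", "60+")[(i // 3) % 3]}
        for i in range(600)]

assembly = sortition(pool, lambda p: (p["region"], p["age"]), 60, seed=1)
```

Because every stratum is sampled at random from within itself, the resulting group mirrors the pool’s demographic mix while no individual can opt themselves in.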
Hundreds of citizens assemblies have taken place across the world, and there is growing experience in the field of transnational deliberation.
The model has a number of attractive features:
- A legitimacy claim - through sortition-based recruitment to represent ‘informed public opinion’;
- Inclusive and diverse discussions - bringing a far greater range of lived experience into debate on issues that have broad societal impacts;
- Depth of discussion & problem-solving - rather than simply polling public opinion, or looking at predefined policy options, the facilitated discussions of a citizens assembly can explore issues in depth, and propose new solutions;
- Protections against capture - because participants are not seeking re-selection or re-election, and are not tied to specific vested interests, they are less likely to be vulnerable to capture by private or specific government interests, and the diversity of the collective provides a further buffer.
There are, of course, ways of adopting elements of a citizens assembly without adopting the full model. In the presentation that follows we’ll talk a lot about deliberative processes rather than citizens assemblies, in order to avoid a dogmatic methodological purism.
Before getting, then, into how deliberative models can apply to AI, let us make one more stop at climate.
Because, on Monday, the National Secretary for Climate Change at the Ministry of Environment and Climate Change of Brazil (hosts of COP30), and the secretary of the Intergovernmental Panel on Climate Change (IPCC), spoke at the launch of the Global Assembly on Climate: setting out a path towards institutionalisation of transnational citizen deliberation in the global governance architecture of climate action.
And with the Global Digital Compact modelling the AI Scientific Report and Global Policy Dialogue on the IPCC-COP relationship, it raises the question: if the established climate governance space has recognised the importance of a citizens assembly as a two-way channel for citizen input to global governance, and citizen mobilization in response to global governance, why not the emerging AI governance space?
Let’s get into the detail then. This new paper is the product of a design lab process, run as part of Connected by Data’s Growing Data Governance Communities programme funded by The Omidyar Network.
Starting in May 2023, we spoke to 15 deliberation and AI governance experts, held three workshops and carried out an extensive desk review. That resulted in a framework for thinking about the potential design of global deliberation on AI, summarised in a set of five options and four closing recommendations on power-aware ways forward.
Along with my co-authors, Claire Mellier, Richard Wilson and Kiito Shilongo, we’ve tried to bring together our learning as a practical document that provides an overview of the kinds of questions a global deliberation on AI could address, the kinds of political processes and institutions it could feed into, and the ways it could operate.
Throughout the document we give examples of existing practice, to show that the options we set out are, with the right backing and resources, eminently deliverable. We also include a set of pros and cons for different options, and design considerations to take into account to deliver inclusive and effective deliberation.
As we come to the point of sharing this report, we’re aware that a number of projects are already making steps towards delivering elements of global deliberation on AI. In this light, we hope that the models and issues we set out are also useful to start to think about evaluating, as well as designing, deliberative AI governance.
All this noted, let us get to the first framing question for a Global Assembly on AI: what exactly should it focus on?
If we take as read for the moment that a well-designed deliberative process can get people up to speed in order to have a high quality informed discussion on almost any topic, then there are hundreds upon hundreds of possible questions that we might want to put to a global deliberative body on AI.
But as we talked with interviewees, we found the kinds of questions suggested cluster into three broad groups:
- High level vision and value questions
- More specific questions of governance and regulatory intervention
- And application or use-case specific questions about how particular kinds of AI should work, or where they should be used.
Each of these levels might lead to slightly different kinds of outputs.
For example, values and vision questions are more likely to result in principles, recommendations, or shared visions of priorities for funding.
A governance or regulation question might provide outputs at the level of proposed rules or procedures, or might provide background material for regulatory action by showing how public attitudes towards AI governance are similar or different by geography, culture or other factors.
And at the level of applications, assembly outputs could be taken as the inputs to technology development itself, or could provide use-case specific guidance.
In the report we give a couple of example questions for each of these categories.
I want to specifically note that, for the first one here, “How do we live good lives alongside AI?”, we discussed in one of our workshops how it would be important to frame the discussions under this question such that resistance, and choosing not to integrate AI in one’s life, should be a clear option on the table.
This highlights that, even after questions are identified, there is a need to think about the way they are presented and unpacked. In the report, we look at the importance of paying attention to deliberation agenda-setting, and to governance of the information presented to participants.
Another issue that has come up in discussing the report findings has been that of the ‘shelf-life’ of answers. While a vision for living alongside AI might remain current for a number of years (depending on views about the extent of disruptive change future AI developments might bring), safety assessment methodologies might, for example, need to be adapted more rapidly as details of technologies change.
Shaping questions to generate answers that are not so general as to be uninformative, whilst ensuring that they are not overtaken by events, is a critical challenge and one particularly acute for this current moment of deliberation on AI. One key means to calibrate on this point is to identify if and where a deliberation is docking with AI governance institutions.
The current AI governance landscape is complex and evolving. Against a backdrop of competition between big AI powers (both state and corporate), and an explosion of AI ethics principles and voluntary codes, a range of global institutions have staked a claim to coordinate AI governance.
The slide maps out an alphabet soup of organisations, from the generalist UN and international development system and its specialised agencies, to technology focussed institutions, regional bodies, multi-stakeholder, multilateral and industry groups. Civil society, academia and trade unions are also considered as docking points able to leverage public deliberation in feeding into policy making.
In thinking about docking for deliberation on AI we need to think about moments as well as institutions.
Although many of these AI governance spaces have been developing for years, breakthrough awareness of generative AI in 2022 accelerated the search for coordinated governance. The 2023 UK Bletchley Park Summit advanced the idea of a distinct regular (six-monthly) global policy forum specifically on AI governance (initially framed around ‘frontier AI safety’, though quickly broadened), and initiated the creation of an international scientific report on AI safety. The Global Digital Compact builds on this with calls for a regular global policy dialogue on AI, backed by a scientific report.
Even so, the AI world does not yet, and may never, have its Climate COP equivalent.
In the structural options for an assembly that follow, I’ll highlight some of the potential places that different forms of assembly could dock.
For example, we might think that there is a pitch to be made to the Global Partnership on AI (GPAI) that it can secure its role as a genuinely global partnership by commissioning truly global citizen dialogue on AI governance - although we may question what influence GPAI will have on international AI rules, as opposed to voluntary commitments. Similarly, we might think that, given the current power dynamics around AI, regional docking, particularly in the global south, offers the best route to address corporate capture of the AI debate, and to strengthen distinctive citizen voice within bloc-to-bloc negotiations: responding to concerns raised in a couple of our interviews about the Brussels effect leading to regulation that does not respect or reflect diverse local contexts.
So let us turn then to that question of form. There are lots of different templates for how a global citizens assembly could be delivered, depending on the importance you place on in-depth real-time transnational conversations, vs. highly scalable or distributed parallel dialogues that are then aggregated together.
In the report we’ve sketched five rough options that I want to very briefly outline now.
For each of these options I’ll give you an example of the kind of focus question it could address, before stepping you through a possible design.
Option 1 starts from a governance and regulation question.
Globally, we’re seeing the creation right now of an AI safety apparatus… but relatively little public input into how it should work - and particularly limited input from the global majority.
This model takes as its starting point the fact that decisions are shaped and made, at least in part, through global convenings.
So - it would aim to dock into the Global AI Policy Dialogue proposed by the Global Digital Compact, or into the continuation of the AI Safety Summit series.
In this option though, we’ve recognised that getting 100s of global citizens to a summit might be prohibitively expensive, and so we’ve sketched out a model based on regional panels - recruiting a cohort of members through sortition - and meeting online in advance of the summit.
Responding to learning materials based on the summit agenda, regional panels would debate the topics of the upcoming event, and suggest questions and priorities they think need to be addressed.
They would then select delegates from each regional panel to represent them at the global summit.
This brings public presence into the global event, and the selected delegates then get to engage in transnational conversation with each other, before producing a collective review of the summit, and feeding back to regional panels.
The loop then closes with the global and regional panels presenting their response to the framing question, and their review of whether the summit addressed public concerns, to an audience of summit stakeholders.
Option 2 takes on a big values and vision question: how can we live good lives alongside AI?
Very much modelled on the global assembly, this option combines the approaches of global sortition to create a core assembly, with self-organised community assemblies and a cultural wave.
The key element of the core global assembly is that it involves transnational deliberation: relying on interpreters and facilitators to bring together one conversation with participants from many different contexts.
This should then dock into either AI-specific or thematic UN institutions or multilateral networks. One particular option might be to focus on the Global Partnership on AI (GPAI), now hosted by the OECD, but currently lacking strong presence of voices from the global majority.
Through community assemblies and cultural wave, the independent global assembly seeks not only to influence decision makers, but also to build civil society capacity, and foster wider public conversations about AI governance in each country participating.
Option 3 places the emphasis entirely on this distributed community level dialogue - and might address a question such as “What global or local rules should govern AI?”
Modelled on the We the Internet dialogue, which docked into the open multi-stakeholder Internet Governance Forum, this option relies on recruiting a network of country partner organisations who would each manage recruitment and facilitation of their own parallel dialogue events.
A central group would supply learning materials, but with an invitation to local partners to adapt these, and to add locally relevant questions to the agenda also. Local partners take care of translation of materials - and discussions take place primarily between participants in-country, rather than transnational discussion.
However, because 10s or 100s of different sites are hosting parallel dialogues in the same few weeks, following a common outline, their findings, summarised quantitatively and in qualitative transcripts or summaries, can be brought together into a global report, presented to global institutions.
At the same time, a big emphasis of this model is on capacity building with local partners, and building a network of partners who have been part of a shared dialogue about how AI should be governed.
Option 4 makes use of AI to discuss AI, and looks at how a global assembly on AI could respond to questions coming from AI firms, or global institutions interested in shaping detailed sector-specific regulations.
For example, the question of when AI systems should provide medical or legal advice (this was one of the questions in OpenAI’s Democratic Inputs to AI project, run last year).
This option is focussed on scale.
By inviting participants to take part in asynchronous discussions through an online platform, and using recruitment agencies to recruit demographically diverse groups of 1000s of participants, a platform mediated assembly could deliver input from potentially tens of thousands of citizens.
It could also have an open participation option, where anyone can join.
Because of the asynchronous nature of this option, learning takes place through self-directed engagement with videos or reading, and deliberation is primarily through written responses to questions, and voting on responses put forward by others.
To enable cross-language engagement, this option would turn to AI translation tools.
And in a process like this, all the discussions are well tracked: making it possible to break down public responses to the question by demographics, by engagement with learning materials, or other factors.
However, this option is not entirely digital and asynchronous. As we workshopped this model, and explored in particular the experience of vTaiwan, we noted the importance of a synchronous convening phase. After digital tools have been used to understand the range of positions taken, and find bridges between them, there is value in bringing together people from different positions in the space to dialogue together on detailed policy proposals and ways forward.
Lastly, we come to option 5. This option involves addressing the relationship between AI and other significant topics.
For example, AI could be an element within discussions of climate change, or global health.
In this model, an expert committee would commission questions on AI, and provide relevant background materials, to be deployed within other existing deliberative processes.
For example - working through a global climate assembly to ask about AI and energy use.
This offers opportunities to integrate AI governance in thematic reporting, and to pull out AI-specific messages for AI governance audiences.
A particular benefit of this model is that it avoids a self-selection bias in recruitment: which risks only those confident talking about AI joining a process about AI. Instead, introducing AI to people who opted into joining an assembly on climate may generate a very different set of insights and responses.
With all these options in mind, we come to the important question of change.
This slide, adapted from the Global Citizens’ Assembly for People and Planet Impact Framework (ISWE Foundation), highlights that the routes to change for a global deliberation are not just through the direct recommendations, or docking into institutions, but might also come through its impact on participants’ own actions, on building solidarity, and on supporting learning at scale.
Indeed, following the Global Digital Compact, global deliberation may be as important for capacity building as it is for closing the AI governance gap.
For any design for global deliberation on AI we should ask firstly which of these routes to impact it is able to adopt, and where it places emphasis.
We should also consider how the implementation of any global deliberation makes use of both insider, and outsider, tactics to secure change.
Docking close to decision making power can provide a more direct route to influence policy… but it is also vulnerable to political change, as in cases when the champion of a particular deliberation either loses their post, or faces political pressures that restrict their ability or inclination to act on deliberation recommendations. In these contexts, also having access to outsider tactics can ensure a deliberation process still has routes to create change.
The slide here also shows the different levels on which deliberation processes are often operating: both with immediate policy goals, and as part of a wider movement to embed deliberative politics as part of new governance models.
We close the paper with the four recommendations shown on this slide.
I won’t say much more about these right now, besides noting that our work in coming months is likely to focus in particular on point 3: building on the pockets of practice and research we’ve uncovered through putting this report together, and seeing how to contribute to a growing community of practice.
I also want to reflect in closing that, whilst we focussed in the report on the concept of global deliberation, many of the ideas we explore can be scaled to national or local levels too. Indeed, it might be that momentum for citizens to be at the table in the global governance of AI will be built through effective processes at state and national levels first.
If you want to keep in touch with future Connected by Data work on deliberative governance of data and AI at all levels, please do subscribe to our newsletter.
To follow and support the campaign for greater global deliberation on AI, amongst other critical issues, please follow the Global Citizens Assembly Coalition.