A Permanent Global Citizens’ Assembly: Options for an AI Focus
On 18th July 2024 we co-hosted a workshop at the “A Permanent Global Citizens’ Assembly: bringing humankind’s voice to world politics” conference at Jesus College, Oxford, exploring an early draft of our Options for a Global Citizens’ Assembly on AI paper.
Tim presented the paper and the stylised options it sets out for a Global Citizens’ Assembly on AI, before turning to a panel of respondents and group discussion.
Presenting the Options
The slides below (and their speaker notes) set out the research so far, and the five options explored (covered in more detail in the draft paper). These are put forward as a discussion starter, and to highlight the many different ‘moving parts’ involved in putting together a powerful and practical proposition for advancing global deliberation on AI, rather than as mutually exclusive templates.
Panel responses
We were privileged to have a great panel made up of:
- Professor Sir Nigel Shadbolt, Principal of Jesus College and Professorial Research Fellow in Computer Science, University of Oxford
- Dr Anna Colom, Senior Policy Lead, The Data Tank
- Reema Patel, Digital Good Network
- Professor Isabelle Ferreras, Professor of Sociology at the University of Louvain and Visiting Research Fellow at the Institute for Ethics in AI, University of Oxford
Nigel opened the responses with an emphasis on the fundamental principle that decisions affecting many people should involve many people. He pointed, however, to the challenge of finding resources to support these efforts, noting that while philanthropy often directs resources to civil society, this funding can be uneven, and there is a need for sustainable funding models that can help us to meet the ambition of global public dialogue on AI. In his contribution, Nigel also pointed to the need to understand AI as much more than just generative AI, and to pursue serious engagement with both the public and the technical community to explore the values embedded in AI technology.
Anna called for deliberation to help us deliver a change in the narratives and discourse around AI, from development that is driven by a small number of people at the head of large companies, (often in practice a small number of men), to a conversation in which there is global discussion on the vision and values for AI. Such discussion should not just be about discrete technical questions of AI governance, but should involve broader conversation on the societal directions we want to take with AI, considering diverse systems and needs across the globe. Anna underscored the importance of an inclusive approach to question design, and building on learning from past citizens assemblies about the importance of processes that can demonstrate their integrity and legitimacy.
Reema offered three key design questions for creating a global citizens’ assembly on AI:
- (1) How do we define global? What ‘global’ means in the context of an assembly will have significant consequences for its design.
- (2) How can different components fit together? The options set out in the paper and presentation could be seen as complementary tools to address different aspects of assembly development, from agenda-setting to deeper deliberation. There may be a route to a connected, rather than segmented, approach.
- (3) Who sets the agenda? Reema stressed that the identity of the entity setting the agenda significantly influences an assembly’s dynamics, pointing out that if a tech company sets the agenda, it will be very different compared to a governmental or intergovernmental body. Reema also noted the challenge of having a global assembly without an obvious commissioner, suggesting that the governance and power structures of the process need more careful consideration.
Reema ended her input by recommending the establishment of a diverse, global steering group that could run an open, inclusive process to identify and prioritise questions, building the legitimacy and mandate of the project.
Isabelle offered several critical observations, cautioning against the unchecked proliferation of democratic participatory innovations, noting that they can sometimes be co-opted by authoritarian regimes or corporate interests, potentially strengthening those systems rather than democratising them. She emphasised the importance of connecting a global citizens’ assembly on AI to existing institutions, ensuring it contributes to meaningful social change and democratisation rather than technocratisation, and pointed out a tension between corporate-driven participatory processes and genuine democratic engagement.
Isabelle referred to tech companies like Meta, Anthropic, and OpenAI conducting their own forums and seeking democratic input, even as this is often juxtaposed with centralising control and increasing surveillance. She particularly noted the importance of including the voices of workers from the AI industry in deliberation: these workers often have significant ethical concerns about the direction of AI development but are not adequately represented in corporate governance structures.
Group discussions
Discussions in the room, and on Zoom chat, provided invaluable feedback on the paper, and on wider work to explore global public deliberation on AI, touching on a number of topics, including:
Understanding the context, and foregrounding power. Technical development and standard setting are not value-free. But dominant narratives are often controlled by large technology firms, and there are geopolitical dynamics around AI. Proposals for a global citizens’ assembly on AI need to be power-literate. We need to consider both constituted and constituent power.
Subsidiarity: the global and the local. Tim Hughes noted that it is not yet settled which issues related to AI require global governance, and so any assembly design needs to be critically aware of the role it plays in constructing AI governance. A democratic principle of subsidiarity may be important to keep in mind.
Framing assumptions embedded in an assembly. Zoe Cohen pointed to questions and concerns about whether AI development should continue at all, considering its high energy use and potential impacts on life on earth: does holding an assembly framed in terms of AI visions or governance pre-judge that AI is of social value?
Governing learning materials. Discussants pointed to the importance of addressing bias and embedded assumptions in educational materials for an assembly.
Citizen vs. public. Discussants pointed out that the term citizen can be exclusionary, particularly in a national context, where migratory populations and others may be excluded from citizenship.
Alternative routes to impact. As well as docking with institutions, we can also foreground routes to impact through social movements, trade unions or other grassroots networks. In particular, Kiito noted the nascent networks of civil society in Africa who could interface with a global citizens’ assembly.
Pluralism and difference across culture. David Leslie proposed that the values embedded in participatory and deliberative processes, such as reciprocity and equality, should be made explicit, both to highlight the normative goals of a citizens’ assembly, and, importantly, to signal its cultural limitations. This point is particularly important when considering ‘local theories of change’ and how assemblies based on a global template may need to adapt, or face limitations in achieving impact, when docking with local institutional forms.
Listening and responding
In early August we’ll be updating the paper drawing on feedback from the session. Drawing also on informal feedback at the conference, a number of areas we particularly plan to look at include:
- Process governance & advisory body. Including more on the need for any GCA on AI to have robust and inclusive governance, both with respect to framing and question setting, and to learning material design.
- Integrating different options. Looking more at the particular strengths of different models for different aspects of deliberative governance: question setting, consensus building, mobilisation and so on.
- Foregrounding power. Strengthening the analysis of the current power dynamics in the AI field, the challenges this presents for a GCA on AI, and some of the choices to be made in navigating this.
- Exploring worker voice in participant and expert selection. In particular, asking whether sortition approaches should, for example, over-sample for AI workers in the global south (e.g. clickworkers providing the labour behind AI), or how their perspectives can also be presented through expert inputs to an assembly.
The 0-draft document remains open to comments. Thank you to everyone who contributed to the session.
An AI Note
_I’m experimenting with the use of AI tools for session write-ups, and with adopting a transparent approach to this._
This write-up was based on author recall (written less than 24 hours after the session), the meeting transcript, and selective listening back to the recording, as segmented by Zoom’s automatic summarisation. ChatGPT 4o was used to provide additional summarisation of points from the raw transcript. Responsibility for the final output rests with the author.
Prompts used:
- Please summarise the first contribution by [Speaker]
- Who raised points and what did they say in the discussion section?
- How was the topic of power addressed?
- Provide a bullet point list of all the critical themes raised
- Give me five challenging quotes from the transcript
Observations:
- ChatGPT surfaced some themes that were not in my notes or recall, and that I would have skipped over in reading the transcript, but that could be validated by listening back to the recording.
- ‘Quotes’ returned were heavily paraphrased and needed to be checked against the transcript.
- Some key themes were missed by automatic summarisation, but prompting for these yielded useful summary / reminder of themes to review in transcript.