As part of our design lab scoping options for a Global Citizens’ Assembly on AI (GCA on AI), Tim attended the launch of the findings from the Belgian Citizens Panel on AI at the Residence Palace in Brussels.
The Belgian Citizens Panel was composed of 60 randomly selected citizens invited to take part in three weekends of learning and deliberation in the first part of 2024 on questions around the evolution of artificial intelligence in Europe. The Panel was organized as part of the Belgian presidency of the European Union, and was the first time a citizens' assembly had been organized in this context.
The panel produced nine key messages, many of which resonate with the findings of our People’s Panel on AI held in November 2023, and that reflect the nuanced and sophisticated nature of citizen dialogue on AI.
The launch event
At the launch event, over a three-hour session, members of the citizens panel presented their findings, and then took part in a series of discussion panels putting questions to invited guests, including members of the European Parliament, industry representatives, representatives of the Belgian EU Presidency and the Belgian Minister of the Interior, Institutional Reform and Democratic Renewal. During questions from the floor, panel members and audience members asked where this work goes next and how it will have influence. I asked about the potential for global deliberation to build on this EU experience.
Reflections, take-aways and questions
Reflecting on the launch and the full report of the panel (which is well worth a read), particularly in light of our GCA on AI workshop the day before, I was struck by a number of things.
Assemblies as a public communication tool. The report notes that “We are surprised at the limited interest of the Belgian media in our panel, especially given the media’s major responsibility to inform the public”, yet during the launch event discussions a number of speakers talked about the potential role of citizens assemblies in increasing public awareness of complex issues like AI. Media presence at the launch itself was also limited, and we noted in the People’s Panel report and independent evaluation that we didn’t quite land the mainstream media coverage we aimed for. What design features could lead to assemblies and deliberative processes that are better able to interface with the media, and contribute to broader public education?
Difficult topics for discussion. In yesterday’s workshop exploring the hypothetical design of global deliberation on AI, one group settled on the governance of Lethal Autonomous Weapons Systems (LAWS) as a subject matter. However, in citizens panel feedback today, one presenter noted that they decided not to dig into questions of AI in defense, in part because, as it was put, “as citizens we are against war”. This may be possible to navigate in a dedicated dialogue on LAWS, but it was a notable input from the Belgian panel to reflect on.
The geopolitics of AI. The report of the panel clearly reflects a rich discussion of the economic system and power relations around AI, including in particular questions of monopolies, multinational companies, and European economic sovereignty and competitiveness. The report concludes that ‘At the level of geopolitics: Monopolies outside the EU are a threat to Europe’s economic system and economic position’ and explores what might be needed for the EU to ‘be on top of the game’. Building also on discussions yesterday about the importance of addressing economic and geopolitical power relationships when thinking about global deliberation on AI, this raises interesting questions: whether global deliberation will be able to find framings for governing AI that transcend economic (and wider) competition between geographic blocs, whether such competition should be treated as a constraint on the design, or whether it will simply make some areas difficult to dialogue on globally.
The role of the public in risk assessment. In the Panel’s message on deepfakes and unreliable information, the report notes that “we were surprised to learn that deepfakes are considered low risk in the AI Act”, and goes on to recommend that these be considered high risk. Whilst there was some discussion during the launch about whether the AI Act already provides adequate high-risk classification for certain political uses of deepfakes, the more general point of interest here is to ask how far risk assessment processes under the AI Act, or other AI governance mechanisms, factor in input from citizens as part of their assessment.
Global focus: social justice and human contact. The last two recommendations of the Citizens Panel are, I think, particularly instructive for thinking about future opportunities for global dialogue. Putting aside the framing of the report that EU initiatives should provide a template for the rest of the globe (something that has already come up in my interviews for the GCA on AI project as a problematic discourse in many parts of the world), the Citizens Panel set out a vision for global agreement in three areas, which I’ll quote in full:
- Ethics: using AI in a non-harmful way, controlling information flows, and basic rules e.g. for the use of AI in armed conflict etc.
- The climate impact of AI
- Social justice: the desire to leave no-one behind in this transition, and not allow the digital divide to grow wider. ‘Leave no-one behind’ and ‘Every human in the loop’ must be guiding principles.
And the final message spoke about the importance of human contact, but did not unpack this in any depth.
In yesterday’s workshop we explored a potential framing of dialogue around ‘the role of humans’ in a context of AI: opening up opportunities for both practical conversations (e.g. what tasks AI systems should or should not be permitted to be used for) and richer dialogue around the values that should guide AI governance. This could perhaps speak to questions of social justice, ethics and human contact in useful ways.
Where next?
Citizen Panels are just one of many tools that can be used to center voices from the public and from affected communities in the governance of data and AI. And their impacts will not be realized without ongoing engagement and advocacy.
At Connected by Data we’ll be exploring the potential opportunities for global deliberation on AI further through our GCA on AI design lab over the next few months, as well as working on community advocacy and campaigning at the grassroots level.