Participatory Governance of AI - Learnings and Opportunities (Open Government Partnership Global Summit)

Tim Davies

In a fragmented AI governance landscape shaped more by economic interests than public concerns, how can diverse voices contribute to shaping AI strategies, policies and frameworks? In this fishbowl session at the Open Government Partnership Global Summit we explored challenges and learning from across the world on approaches to give citizens greater agency in shaping the future of artificial intelligence in both public and private sectors.

This short note, drawing on a transcript summarised via AI and then edited by hand shortly after the session, highlights some of the themes explored. Discussions were initiated by a panel of contributors including: Margarita Yépez (DataLat, Ecuador), Rocío Suanzes (IE University, Spain), René Mendis (Social Change Factory, Senegal) and Fay Skevington (Department for Education, UK), who were then joined by attendees from the floor contributing their own perspectives.

How can we centre the voice of affected citizens in public policy decisions about Artificial Intelligence?

  • Start from the basics: Several contributors stressed the need to “work on the basics” of good governance and public participation before applying them to AI, noting that participation often feels performative, or that existing participation regulations and frameworks are poorly implemented. At the same time, panelists noted that, because AI is understood as a fast-moving subject area, there can be opportunities for agile and creative public engagement around AI that don’t need to wait for top-down processes.

  • Recognise different stakeholders: Discussions explored the role of different groups in providing ideas and alternative visions for AI development, including young people, grassroots community groups, and local start-ups and entrepreneurs, who may have different perspectives from larger firms developing and deploying AI. Thinking about the potential cross-sectoral alliances interested in fostering dialogue about the co-design of new approaches to AI can be productive.

  • Focus on language and accessibility: A significant barrier for both public sector and citizen engagement with AI can be the technical language used in discussions. Speakers talked about the need to find shared language across civil society, technical communities and governments, and to find simpler narratives that can engage startups, innovators, and the general public.

  • Connect participation with co-design, policy, procurement and ongoing governance: Participants shared a range of examples including: participatory co-design leading to new AI tools that better meet citizen needs; consultation with students and teachers shaping the direction of policies on uses of AI in education; potential approaches to involving the public in procurement decision making; and establishing ongoing governance panels for citizen oversight of key data infrastructure. Through sharing method cards from the Good Governance Game around the room, audience members and panelists were invited to think about the range of creative ways of engaging the public with shaping AI.

Facing the challenges

Although the session aimed to map opportunities to advance participatory governance of AI, we often returned to the challenges, including:

  • Exclusion of Citizens: Participants explored how the development of legislation for new technologies, including AI, often favors technical experts and academics over civil society and the citizens who are directly affected. This was framed as a mistake to avoid in current AI legislation efforts.

  • Institutional Inertia: There were reflections on how traditional government bureaucracy and its slower pace clash with the speed and disruptive nature of AI, making it hard to embed participatory practices.

  • Infrastructure and Cost: In some countries, the lack of technological infrastructure and the high cost of necessary AI solutions from large, often US-based, companies limit government choices and create concerns about data transparency and foreign control.

  • Lack of Agency: A straw poll in the room revealed that even within the Open Government Partnership (OGP) community, many do not feel they have the agency to steer how AI is shaping society.

  • Framing participation: A challenge was posed on whether participation is genuine when the solution (AI) is already assumed. It was argued that the focus should be on establishing rules of the road for AI governance rather than simply accepting its use, allowing for critique and rejection, and that we should frame questions openly to avoid an assumption that AI is the answer.

Approaches and inspiration

We heard examples of work from Spain, Estonia, Ecuador, Costa Rica, the UK, Senegal, Taiwan, Indonesia and The Netherlands, amongst other places, demonstrating the breadth of emerging thinking about how to involve citizens in governing AI - both from inside and outside of government.

Some of the examples looked at:

  • Co-creation and Youth Involvement: Speakers argued for moving young people from “beneficiaries to co-designers” and involving them from the beginning of all processes. Examples included deliberative engagement with children and teachers to inform policy (e.g., focussing on teacher-support tools over student-facing ones) and the concept of a “pupil-led governance model” for educational data.

  • Creative and Diverse Methods: Discussions explored how to move beyond simple consultations and surveys to a broader repertoire of approaches, including:

    • Legislative theatre and creativity workshops (UK example).

    • Advocacy coalitions and coordinated joint actions by civil society organizations (CSOs) (Chile example).

    • Developing alternative, responsible AI systems to critique existing ones and raise awareness among decision-makers.

    • Creating independent public repositories of AI use and algorithms to push for transparency, and enable public input.

In thinking about how to advance agendas for participatory AI in governments, discussions explored the need to find the right incentives and to build trust and relationships, recognising the diversity of experiences and ideas amongst officials, rather than treating the bureaucracy as monolithic.

We also explored the opportunity created by the fact that public officials and members of the public are often learning about AI at the same time: there is not necessarily the same difference of knowledge and experience as there may be on other topics. This creates a chance to learn and discern about AI together.

Where next?

The session ended looking forward, asking how the OGP can support further work to develop participatory governance of digital technology. Although we didn’t find all the answers, we explored the importance of the challenge, and found that ideas and insights relevant to meeting it exist across the partnership.

For Connected by Data, as co-hosts of the session, we’re looking forward to further developing the Good Governance Game as a resource to support design of participatory processes, and to taking forward the Principles for Public Participation in the Procurement of AI launched earlier this week.

We’re also happy to share the outputs of our earlier research for the OGP on Mapping Participatory Digital Governance captured in this report.
