Collective intelligence and AI governance

Maria Luciano

The Democracy Network has organized a series of AI & Democracy webinars, and I attended the one on “Using Collective Intelligence to Govern AI”. Flynn Devine, an OpenAI grantee, presented his Recursive Public project, which uses a consensus-finding digital democratic process pioneered by vTaiwan to set the agenda: mapping the big questions everyone thinks we need answered.

Interestingly, Flynn opened his presentation by tackling the concern I was left with after the Data & Society webinar: “we need participatory practices in AI that can move at the same pace as technology and societal norms, and adapt to make sure outputs maintain relevance and power.” So what would it look like to surface and reintroduce interconnected deliberation processes once a given process has been completed?

He went on to explain his use of “recursion” in four points:

  • Dynamic issue surfacing: continually prioritise and address issues as they arise, rather than being tied to a static agenda.
  • Process chains for mass participation: a series of interconnected deliberations that facilitate mass participation and ensure the collective output remains representative and up to date.
  • Time-fluid conversations: by continually feeding outcomes back into new deliberations, the recursive public ensures that the collective intelligence remains current.
  • Multiple audiences and outputs: engage the many key stakeholders and continually refine outputs so that they resonate with their intended recipients.

The results of this project are yet to be shared, and my curiosity about the possible points of consensus continues to grow. One thing I’ll be looking out for relates to a finding from a Collective Intelligence Project collaboration with OpenAI on democratizing risk assessments, also presented by Flynn. When investigating “what does the U.S. public want to mitigate and measure when it comes to LLM risks and harms?”, they noticed that people mostly care about building good governance infrastructure into these processes rather than analyzing any specific risk. This seems to relate to the lack of trust in the governance systems adopted by private companies, which tends to steer the conversation away from concrete risks and harms.
