

Tim attended a roundtable at the Foreign, Commonwealth and Development Office organised by the SAFE AI project.
Against a backdrop of unprecedented crisis for the humanitarian system, which is facing growing demand, devastating cuts to funding and programmes, and an erosion of respect for international humanitarian law and norms, the roundtable explored the additional challenge of how the humanitarian sector should respond to AI.
I was asked to provide a few minutes of input during discussions on Power & Participatory AI. My notes are reproduced below (edited to reflect delivery, and to expand on a few points with links).
Setting the scene, Sarah W Spencer, SAFE AI Team Leader, outlined: The humanitarian community has, for decades, largely agreed that communities affected by conflict and crises deserve equal voice and agency in the design of interventions that impact their lives and well-being. These aspirations are mirrored across the AI community, in industry, civil society, and academia alike. However, action has failed to meet aspiration on both fronts.
The question: How can we create opportunities and methods for more meaningful community participation in the design, development, and deployment of AI solutions while managing the risks associated with tokenistic participation and participation washing? What does “good” participation look like?
Earlier in our roundtable discussion, one speaker noted that “AI doesn’t work so well in contexts it’s not familiar with…”. We should take this as a provocative starting point. There are perhaps integral features of machine learning systems that can make them uniquely unsuited to humanitarian contexts, which are almost by definition about dealing with disruption and emergent crises not captured in past data.
Of course, this is an over-generalisation. There are humanitarian tasks to which specific AI tools may be well suited: but we should not assume that AI from one context offers meaningful answers in another. Working this out requires participation.
When we talk about participation in AI, it can mean different things to different people:
- Participation in use - increasing the number of people with access to AI tools.
- Participation as an input to AI - as, for example, when we pair artificial intelligence tools with collective intelligence processes and community-collected data to fine-tune AI to different contexts.
- Participation in AI design and governance - where communities have power over substantive decisions about the AI that gets built, deployed and regulated.
It is this last sense I want to focus on.
We’ve been asked to think about what makes for good participation. I want to focus on three aspects. Good participation:
- Is informed - not just asking people what they think of an issue, but creating space for learning, dialogue and deliberation.
- Has impact - if the people who are supposed to benefit from change can’t tell you about it, then we should question whether participation has been meaningful.
- Is regenerative of the fabric of democracy - our engagements should leave the civic sphere better than we find it. Too often participation in the development field (and in other spaces) has been extractive - and today it’s more important than ever that we are building, not eroding, our collective democratic capacities.
But how do we reach for these kinds of ideals? It’s tricky: but we have a wealth of learning from decades of public engagement, and from research and civic action seeking to shape our technology landscape. I have six brief points:
(1) Analyse power and incentives.
We need to recognise AI as a set of socio-technical systems, and use the full toolkit of thinking and acting politically when considering opportunities for participation. We should recognise that AI may be disrupting existing power structures, and that the role of traditional elites may be changed by AI. This can, in some contexts, create space for unusual alliances in support of public voice in the governance of AI.
Our analysis shouldn’t be limited to technology alone: it’s important to explore the organisational and economic restructuring intertwined with the potential or actual introduction of artificial intelligence when considering the opportunities and challenges for public engagement.
(2) Build receptiveness to input
We need to think about the people who need to act on public input, and to recognise the experiential aspects of participation.
Giving engineers greater direct contact with affected publics, and seeking to change the way design or deployment decisions are talked about, can at times be as important as gathering and summarising public inputs to present to decision makers.
For civil society organisations, a focus on being receptive to input can involve thinking about our adaptive capacity to listen to, act on, and amplify participant voice.
(Aside: this point draws upon our evaluation of Justice Data Matters, and our experience of the People’s Panel on AI, both of which explored the readiness of audiences to listen to and hear public inputs.)
(3) Build community capacity to push for change
As well as making organisations more receptive to public input, we should think about building the capacity of publics to advocate for the change they want to see. Participant journeys should not be one-off: people should have opportunities to build on informed engagement and take on leadership roles in future. In crisis situations this can be particularly challenging, and it is layered: sometimes it is about developing the capacity of people with lived experience of crisis, now in a more stable context, to be advocates drawing on their own past experience, and on behalf of their peers experiencing current crises.
(Aside: this draws in part on our Community Campaigns on Data work last year, exploring ways to build collective community capacity to advocate for changes in data practices)
(4) Focus on the most affected communities
As Meg Young reminds us here, we need to focus attention on the most affected communities first. Drawing on ideas of design from the margins, we should prioritise engagement with those most affected.
We also need to start from present realities, avoiding the futurist’s trap of focusing on the ‘revolution’ in AI capability that is perpetually just a year or two away, extrapolated from dramatic growth curves. In a development context, we know that theoretical Internet speeds have leapt up over the years… yet billions are still disconnected or have only limited connectivity. Theoretical futures for the few should not dominate over exploration of present realities for the many.
At the same time, a focus on communities currently affected by AI is resolutely not a focus on current AI users. Too often, engagement activities assume we need to focus on those with direct access to AI systems, rather than on those whose realities are being shaped by the use of AI: including through distant AI use by humanitarian actors.
(5) Build infrastructure for engagement
Good engagement is not free: there is a real cost to good public engagement. But too often that cost is inflated by reinvention of the wheel and duplication of effort. We need better sharing of tools and resources to support participatory practice around AI, and better re-use of findings. It needs to be easier to bring public voice around tables like this: it’s striking that there are stories of the impact of humanitarian AI that we’re not hearing right now, but that would transform our conversation if they were here.
As wider discussions during the roundtable suggested, having a better map of where AI can, and cannot, support humanitarian action, and shared work to plan and process public inputs to design and governance in specific areas, would be of significant value.
(6) Be strategic
We need to keep in mind that, even with infrastructures that lower the cost of engagement, we will still need to prioritise, focus and think about the kinds of engagement that have most impact.
I’ve been thinking a lot recently about review as an output of participatory processes: whether the kind of outside deliberative review we delivered through the People’s Panel on AI, with its performative moment of presenting findings and inviting decision-makers’ feedback, or giving publics a role in formal peer review to accept or reject project plans.
In a similar vein, the Participatory AI Research & Practice Symposium explored ideas of participatory audit: seeking to draw on the growing authority of AI audit processes to bring public inputs to bear on substantive decisions.
Critically, in the context of resource-constrained humanitarian action, meaningful participation in AI governance may not be about the number of people engaged, but about bringing small numbers of people, with relevant experience and support, into the decision-making spaces that matter.
Coda
This write-up reflects only the points I made. The rest of the discussion was wonderfully rich, with interdisciplinary contributions from humanitarian innovators, technologists, ethicists and leaders from many settings. Thanks to the SAFE AI project for bringing the discussion together and inviting input from Connected by Data. If public notes of the wider discussions are made available, I’ll link to them here.