Prioritising AI

Jeni Tennison

I’ve posted recently about the challenge of purpose and priorities in the adoption of AI by the public sector. This blog post expands on that to look not at what I think the priorities should be, but at how they should be decided, and how that prioritisation should be institutionalised, given it isn’t a one-off exercise.

It’s easy to think “there’s AI now, therefore we must do something with it” and “everyone else is using AI so we should”.

It’s also easy, when you’ve got this seemingly magical general-purpose tool, to think that every problem that could be addressed by AI should be.

(It’s also easy to go the other way, to be ignorant about what data or AI can do, or fixated on all the ways things could go wrong, leading us to miss out on places where it can really make a difference. As this article describes, we shouldn’t be either blind optimists or outright pessimists. We can overcome delivery challenges only if we take them seriously. We fail when they are glossed over, or we don’t learn from past failures and successes.)

Back to priorities: I reflect on years of being asked “which datasets should we open up” (because open data must be useful for something!). It’s so difficult to answer that kind of question, both because evidence will always be lacking for something new and because “it depends”.

I see DSIT is doing something similar with Matt Clifford, asking him to identify the biggest opportunities for AI.

So let’s look at four approaches to answering questions about priorities for AI (including in the public sector):

  1. model it
  2. lean on values
  3. lean on goals and missions
  4. focus on learning

Modelling

Also known as the McKinsey method or multiplying big numbers together to come out with a really big number. Bonus points if you use ChatGPT to do some of the analysis.

The trouble with models is that they rely on assumptions, including about how the world works, but complex systems rarely respond predictably to interventions; introducing AI frequently has effects we don’t anticipate. For example, it’s not straightforward to get productivity gains from generative AI, because people.

Not that models are bad: modelling specific systems (eg cancer care) can help work out where the bottlenecks are and help identify meaningful measures for AI system efficacy based on whole system goals. (When you only measure localised impact, it can make things worse.)

The process of creating a model is also valuable as a way of making assumptions explicit and exploring scenarios.

If I were using models to prioritise, I’d make them with people who think about the world differently, and make them playable so we can all explore “what if”.
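To make the “playable” idea concrete, here’s a minimal sketch in Python (not drawn from any real model: the pathway stages and capacities below are invented for illustration). It treats a care pathway as a chain of stages and lets you ask “what if AI sped up this stage?”, which is enough to show why a localised efficiency gain away from the bottleneck doesn’t shift whole-system throughput.

```python
# Toy "what if" model of a multi-stage service pathway (e.g. referral ->
# diagnosis -> treatment). Stage names and capacities are invented for
# illustration; a real model would be built with the people who run and
# use the service.

from dataclasses import dataclass


@dataclass
class Stage:
    name: str
    capacity_per_week: float  # cases this stage can process per week


def whole_system_throughput(stages: list[Stage]) -> float:
    """Cases per week that make it through the whole pathway.

    In this simple flow model the pathway can only move as fast as its
    slowest stage, so the bottleneck sets overall throughput.
    """
    return min(stage.capacity_per_week for stage in stages)


def what_if(stages: list[Stage], boosted: str, factor: float) -> float:
    """Throughput if one stage's capacity is multiplied by `factor`
    (e.g. an AI tool that speeds that stage up)."""
    adjusted = [
        Stage(s.name, s.capacity_per_week * (factor if s.name == boosted else 1.0))
        for s in stages
    ]
    return whole_system_throughput(adjusted)


if __name__ == "__main__":
    pathway = [
        Stage("referral triage", 120),
        Stage("diagnostic scan", 80),   # the bottleneck in this made-up example
        Stage("results reporting", 100),
        Stage("treatment planning", 95),
    ]

    print(f"baseline throughput: {whole_system_throughput(pathway):.0f} cases/week")

    # Doubling a non-bottleneck stage looks impressive locally but doesn't
    # change what the whole system delivers...
    print(f"2x results reporting: {what_if(pathway, 'results reporting', 2.0):.0f} cases/week")

    # ...whereas a modest improvement at the bottleneck does.
    print(f"1.2x diagnostic scan: {what_if(pathway, 'diagnostic scan', 1.2):.0f} cases/week")
```

A real model, built with clinicians, operational staff and patients, would be far richer, but even a toy like this forces the assumptions out into the open where they can be argued about, and gives people something concrete to play with.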

Values

While it’s about health rather than AI, I like how the WHO guidance on evidence-informed decision making makes explicit the range of factors that go into making policy decisions: not just evidence, but also political context, constraints, equity, values and so on.

Evidence-informed decision-making (EIDM) emphasizes that decisions should be informed by the best available evidence from research, as well as other factors such as context, public opinion, equity, feasibility of implementation, affordability, sustainability, and acceptability to stakeholders (3–5). It is a systematic and transparent approach that applies structured and replicable methods to identify, appraise and make use of evidence across decision-making processes, including for implementation (4). EIDM adheres to the principles of equity, equality, and accountability (6).

EIDM has its roots in the evidence-based medicine movement and HTAs dating back to the 1980s. It has since expanded beyond clinical care and health systems to include a broader notion of evidence-based policy-making (3,7–9). The more recent emphasis on evidence-informed over evidence-based decision- and policy-making takes into account that research evidence is often but one of several factors influencing policy-making processes (3). As policy-making inherently takes place in a political context, economic interests, institutional constraints, citizen values and stakeholder needs tend to play an important and sometimes conflicting role (4,10,11).

Values matter because we know there are default winners and losers from tech transformation. This article by Greta Byrum & Ruha Benjamin names “powerful investors, industry leaders, elite technologists, and special interests”.

Similarly, as Dan McQuillan describes in his book “Resisting AI”, the default impact of AI is to “amplify existing inequalities and injustices, deepening existing divisions on the way to full-on algorithmic authoritarianism”.

A lot of the discourse about AI adoption in the public sector is technocratic or “politics-blind”. It casts the political choice to prioritise cost cutting efficiencies and private sector innovation as the only option. But this makes those harmful default impacts more likely.

We could make other choices: centre humanity, honour diversity and prioritise relationships. It would mean fewer chatbots, though. Less focus on automatically identifying benefit fraud. Less automated facial recognition and performance monitoring of public sector workers.

We could draw some values-based red lines, like the EU AI Act has done by listing prohibited systems. We could prioritise investment in lower risk AI and consciously experiment around how to build confidence when developing AI in more contentious areas.

Values aren’t part of policy conversations on AI, partly because policymakers are listening to a narrow set of voices and partly because AI is wrongly seen as neutral. Bringing in civil society (including non-profits whose mission is to protect people’s rights and interests) would provide some balance.

(By the way, if you find civil society hard to reach, we at Connected by Data help organise the Data and AI Civil Society Network – we’d be delighted to point you in the right direction or help you organise roundtables / workshops.)

More institutionally, public deliberation could have a role here. Deliberative approaches – like our People’s Panel on AI – can surface what matters to real people and map the changing contours of public acceptability of AI.

Missions

This Government has articulated its goals moderately clearly in its five Missions. These give a good starting point for identifying areas where AI might be socially and economically useful and satisfy democratically identified priorities.

There is no “AI Mission” because AI is a (potential) means, not the ends. Complex public sector problems require multi-pronged interventions. AI priorities need to work in concert with them for maximum impact, not be decided independently in a central “AI Mission Control”.

Mission leads should see AI as part of their toolbox, so we have to embed AI adoption into Mission-delivery machinery.

IIPP and the Future Governance Forum suggest governance around Government’s Missions that incorporates technologists to spot AI opportunities. Similarly, NESTA and the Institute for Government have “Data and technology” as a foundational piece of Mission-driven government.

Good Mission delivery requires multiple perspectives. Do you begin to see a pattern here? Prioritisation that includes a diversity of voices will help make AI more successful: more grounded in reality, more focused on real problems, more able to anticipate and mitigate harms.

Learning

Fundamentally, we don’t know where AI will have most impact, because lots of AI is new and because context and details of what kind of AI is used in what way really make a difference. We need structures that accelerate distributed learning about what works.

The existing cross-government AI community is part of this. Communities of practice can be effective whether or not you have a more concentrated “centre of expertise” – and in my opinion they are more effective for learning and for sustainable, distributed transformation.

I’d also draw on Dave Snowden’s “Managing complexity (and chaos) in times of crisis”, which includes a bunch of concrete recommendations based on the Cynefin framework around how to set up these learning structures in complex, changing systems.

This includes using the expertise of public sector workers, members of the public, and civil society groups. Co-design with them can prevent costly mistakes. After deployment, they are the ‘canaries in the coal mine’, part of an extended sensor network for AI impacts that help us detect and fix problems more quickly.

Talking of communities of practice, we support one that operates across the public sector (central and local) to help us all learn how to practically include these different voices in the development of data and AI systems. Do get in touch to join.

In conclusion, there are lots of ways to make decisions about priorities for AI. They all work better when they include diverse voices. We should be looking to institutionalise ways of making those decisions that don’t leave it all up to AI boffins and fans.
