AI and Resilience

Jeni Tennison

I was a provocateur at an invite-only workshop co-organised by the Bridging Responsible AI Divides Program (BRAID) at the University of Edinburgh and the Ada Lovelace Institute. The purpose of the workshop was to gather scholars and high-level thought leaders from several disciplines to reflect on the emerging use of the concept of resilience in other sectors, and its utility for capturing the risks and framing the opportunities of AI. It was a half-day event, aiming to articulate the uses of resilience and its possible value for future policy and regulatory thinking. The objective was to “think aloud” about the limits of current thinking and whether resilience can productively extend those limits.

I found the session interesting as an exploration of the different meanings and implications of the term ‘resilience’. It’s often used as a concept for containing or managing risks and shocks, but it’s also applied to the longer-term strengthening of social and political systems. It’s not just about getting through and bouncing back after a crisis, but about improving over time. Resilience is about maintaining the things we value in the world.

I spoke about resilience at three levels:

Resilient AI

More and more AI systems are provided as services by large tech companies. The Competition and Markets Authority’s recent update paper analysing the AI Foundation Models market highlights market concentration, driven by exclusive access to data, compute and expertise.

Monopolies pose a threat to resilience because they can become single points of failure. We’ve seen the knock-on effects on the resilience of small businesses when API service providers suddenly change the way those APIs work, and the kinds of damage to our social, scientific and democratic fabric caused by the disruption of Elon Musk’s takeover of Twitter.

It’s worth noting that AI systems have vulnerabilities of their own that could threaten their resilience, ranging from cyber-security threats through to their dependence on chips, energy, water, and data. The latter is perhaps particularly interesting as the information environment that large language models rely on becomes polluted with AI-generated content.

In situations where there are natural monopolies, good governance becomes incredibly important for maintaining resilience as it provides risk management and stability. Unfortunately, as recent power struggles at OpenAI illustrate, good governance is not a high priority for some of the new companies that are likely to become dominant AI giants.

We should be taking action to ensure that the AI we rely on is resilient by reducing monopoly power and introducing better governance.

Resilient systems

AI is becoming embedded in the control systems for a range of operational processes, such as logistics, finance and NHS bed management. The way this AI is built has an impact on the resilience of these broader systems.

First, one way of thinking about resilience is as the ability of a system to recover from an external shock. This kind of resilience is enhanced by systems having some slack in them. Systems operating at 99% of capacity (or at 99% efficiency) do not have enough slack to withstand shocks, which leads to breakdowns such as the interruption to international just-in-time goods trade when the Ever Given was stuck in the Suez Canal. The selling point of AI in operational decision making is that it optimises and improves efficiency, but this may have the side effect of making the systems that rely on it less resilient to these kinds of shocks. (This was discussed in Episode 30 of the Disorder podcast: Is there Order in the universe? Or are we all the playthings of chance?)

Second, another way of thinking about resilience is as the way systems reinforce existing ways of operating. This is a kind of counter-productive resilience that prevents growth and change, and struggles to adapt to new contexts. AI can easily demonstrate this unhelpful kind of resilience because it is built on historic data from previous contexts and can’t adapt easily to new ones. This is illustrated by the racist and sexist biases we’re all familiar with in AI systems; the difficulties in taking health AI from the lab into practical real-world use; and the challenge of using AI-based predictive models in novel contexts, such as during the Covid pandemic.

We should be optimising AI for resilience rather than efficiency, and for adaptation rather than replication.

Resilient people

The final way of thinking about resilience that I wanted to touch on was about the effects of AI on the resilience of people and communities.

Many people are not able to be resilient because they are operating “on the edge”. They might be in precarious employment, such as gig work or zero-hours contracts. They might be reliant on benefits that can be withdrawn unexpectedly. They may already be struggling when they have to cope with rapid changes in the cost of energy, or of living more generally.

AI is making their lives even more precarious in multiple ways:

  • Jobs are changing to become less secure in general, particularly when hiring and firing is done by algorithm, and there is the persistent threat of job losses.

  • Algorithms are biased, exacerbating inequalities for the people who struggle most, but they are also embedded in uncaring bureaucratic systems: treating people with fairness requires an understanding of context, compassion and humanity that automated bureaucracy lacks. The Post Office Horizon scandal, Australia’s Robodebt scheme and the more recent Carer’s Allowance scandal illustrate how algorithms can introduce shocks into the lives of the people least able to cope with them.

  • Increasing digitisation and use of digital services is removing the human-to-human contact that helps us feel connected to and supported by each other. This is exacerbated by case-management systems that force workers to spend less time with people. Whether it’s post office workers no longer pausing to chat with elderly residents, supermarket till workers being replaced by self-checkout machines, or mental health apps taking the place of counsellors, we have less time to make these connections.

My contention, therefore, is that AI (and digital technology more generally) is damaging our resilience as people and communities. Undermining our resilience harms our mental and physical health (with counterproductive knock-on impacts on productivity).

We should be pursuing AI that makes us more resilient as people and communities, rather than less.
