What does the Post Office scandal teach us about data and AI regulation?

Jeni Tennison

The Post Office scandal has reached the mainstream, thanks to ITV’s dramatisation, Mr Bates vs The Post Office, broadcast earlier this month. The political response to the harms done to subpostmasters by faults in the Post Office’s Horizon system has rightly focused on correcting miscarriages of justice and providing compensation to the people affected. But there are other lessons to learn from this scandal: about how technology can go wrong and the implications for how it’s developed and embedded within wider processes; and about the rights we need to bring such errors to light, correct decisions made about us, and hold organisations accountable.

But even as the government congratulates itself on finally acting to compensate victims and quash convictions, its own Data Protection and Digital Information Bill is laying the groundwork to make similar scandals more likely in the future, and to make it even harder for campaigners to achieve justice.

Technology goes wrong

The Post Office’s initial response to the problems subpostmasters were encountering in the early 2000s was to assume that the Horizon software could not be at fault. The courts made a similar assumption, based on 1997 Law Commission guidance which stated that ‘In the absence of evidence to the contrary, the courts will presume that mechanical instruments were in order at the material time… The principle has been applied to such devices as speedometers and traffic lights and in the consultation paper we saw no reason why it should not apply to computers.’ Plainly, this assumption was wrong.

In reality, any data-based system might be faulty. We only find out about the ones that fail at massive scale, with horrendous damage to people’s lives. For example, in the Australian Robodebt scandal, supposed overpayments to welfare recipients were wrongly calculated, affecting hundreds of thousands of people. In the Dutch childcare benefits scandal, 26,000 parents were falsely accused of fraudulently claiming childcare benefits.

AI is granted the same magical aura today that computer systems had in the early 2000s, yet it is even more prone to error. Code written by humans can at least be inspected to identify bugs. AI systems created through machine learning are more opaque, and subject to problems caused by inaccurate or biased training data; by under- or over-fitting, which means an algorithm that performs well in the lab can still fail in deployment; and, in the case of generative AI such as ChatGPT, by hallucinations that produce false but persuasive content.
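To make the over-fitting point concrete, here is a minimal sketch in Python. The data and model are invented for illustration; the point is the pattern, not the numbers: a model can reproduce its training data almost perfectly and still fail badly on new data drawn from the same process.

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)

# Toy data: ten noisy observations of a simple linear relationship.
x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + rng.normal(scale=0.1, size=x_train.size)

# A degree-9 polynomial passes through every training point...
model = Polynomial.fit(x_train, y_train, deg=9)
train_mse = np.mean((model(x_train) - y_train) ** 2)

# ...but on fresh data from the same process it does far worse,
# because it has memorised the noise rather than the relationship.
x_new = rng.uniform(0, 1, 1000)
y_new = 2 * x_new + rng.normal(scale=0.1, size=x_new.size)
test_mse = np.mean((model(x_new) - y_new) ** 2)

print(f"error on training data: {train_mse:.2e}")  # effectively zero
print(f"error on new data:      {test_mse:.2e}")  # many times larger
```

Impressive results in testing, in other words, say very little about behaviour in the messier conditions of deployment.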

All software is prone to errors, so the way it is embedded into wider bureaucratic systems and processes matters hugely. People are damaged when data and AI systems are assumed to be fault-free. In the Post Office scandal, subpostmasters paid for discrepancies out of their own savings, or were subjected to bullying and criminal prosecutions. In the Robodebt scandal, people identified as debtors received automated threatening letters and visits from debt collection agencies. People suffered financial losses, reputational damage and mental health impacts. Some took their own lives.

The most important lesson we should take from these scandals is that the social and organisational processes around software and AI systems should never assume their outputs are completely reliable. It is not simply a question of having humans in the loop, but of demonstrating humanity. Processes wrapped around automated decision-making should operate under an assumption of “innocent until proven guilty” and a core principle of care for those whose lives could be affected (see the sketch below). This is particularly important when the targets of AI and data-based decisions are already in precarious positions.
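To illustrate what that might mean in software terms, here is a hypothetical sketch in Python; every name in it is invented, and it stands in for a design principle rather than any real system. An automated flag opens a supported human review and informs the person affected, while the software itself has no power to suspend payments or trigger prosecution.

```python
from dataclasses import dataclass

@dataclass
class Flag:
    """A discrepancy raised by an automated system (hypothetical)."""
    person_id: str
    details: str

def open_review_case(flag: Flag) -> None:
    # Stand-in for queueing the flag for a trained human reviewer.
    print(f"Review case opened for {flag.person_id}: {flag.details}")

def notify_affected_person(flag: Flag) -> None:
    # Stand-in for telling the person what was flagged and how to respond.
    print(f"Notification and offer of support sent to {flag.person_id}")

def handle_flag(flag: Flag) -> None:
    """Route a flag under a presumption of innocence.

    The software never acts against the person on its own: a flag only
    opens a human review, and the person is told and offered support.
    Deliberately absent: suspend_payments(), prosecute().
    """
    open_review_case(flag)
    notify_affected_person(flag)

handle_flag(Flag(person_id="SPM-042", details="unexplained till shortfall"))
```

The design choice being illustrated is simple: the automated system may raise questions, but people with a duty of care answer them.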

This is why the DWP’s plans to gain permission through the Data Bill to monitor bank accounts to detect fraud are so concerning. The government does not yet have the practices, processes and governance in place to ensure that human-based assessments are carried out with care (look at the damage caused by disability assessment processes over the past 20 years), let alone assessments carried out by AI. There is nothing in place to prevent the DWP automatically suspending payments to benefit recipients (including pensioners) who are flagged by a fraud detection system that will inevitably make mistakes.
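The scale of those mistakes is easy to underestimate. The numbers below are invented for illustration, not DWP figures, but they show the base-rate problem with any large-scale fraud detector: when genuine fraud is rare, even a fairly accurate system flags enormous numbers of innocent people.

```python
# Assumed, illustrative figures: 1% of 10 million claimants commit fraud,
# and the detector catches 95% of fraud with a 1% false-positive rate.
claimants = 10_000_000
fraud_rate = 0.01
true_positive_rate = 0.95
false_positive_rate = 0.01

fraudulent = claimants * fraud_rate          # 100,000 people
honest = claimants - fraudulent              # 9,900,000 people

flagged_fraudulent = fraudulent * true_positive_rate   # 95,000 flags
flagged_honest = honest * false_positive_rate          # 99,000 flags

share_innocent = flagged_honest / (flagged_fraudulent + flagged_honest)
print(f"innocent people flagged: {flagged_honest:,.0f}")
print(f"share of all flags that are innocent people: {share_innocent:.0%}")
```

Under these assumed numbers, roughly half of everyone flagged is innocent: an automatic-suspension policy would cut off payments to around 99,000 people who had done nothing wrong.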

Early engagement with the people and communities affected by data and AI systems is essential to identify risks and potential impacts. Diverse voices in the development process help to explore “what if” scenarios and reflect the lived experience of those on the sharp end of automated decision-making. Imagine how different things might have been had subpostmasters been involved in the development of Horizon, or been listened to when they reported early problems.

Early consultation around data and AI systems is essential, and is encouraged by GDPR, but removed by the Data Bill. GDPR also requires organisations that rely on “legitimate interests” to process data to pause and reflect on whether that processing is necessary and what impact it might have on people. Again, the Data Bill undermines these useful checks and balances by creating a list of “recognised legitimate interests” for which these processes can be side-stepped. This list – which Government Ministers can add to at any time – includes the broad use of data in emergencies, when addressing crime and advancing national security, and during election campaigning.

Accountability requires collective action

A second lesson is the importance of facilitating collective action and the pursuit of redress by those affected by such systems. The victims of the Post Office scandal were repeatedly told that they were the only ones who had encountered the difficulties they were reporting. As individuals, they were unable to fight the organisational might and substantial resources of the Post Office. It was only when they realised they weren’t alone, and formed the Justice for Subpostmasters Alliance, that they were able to wield enough collective power to – eventually – get the Post Office to admit to the errors and clear their names.

Legislative frameworks for data and AI systems must therefore be oriented towards enabling people, groups and communities who are adversely affected by them to find each other, organise, and strike back. Campaigners fighting for justice require transparency about:

  • the existence and use of data and AI systems
  • how they are designed and what data they use
  • identified risks and impacts, including through equality impact monitoring and reporting
  • the complaints and requests for help that have been received

Accessing this information – particularly from the public sector – should not require court orders or official inquiries: it should either be published openly as a matter of course, or be available on request. And yet the Government has included clauses in the Data Bill enabling organisations to refuse requests they determine to be “vexatious or excessive”: a recipe for stonewalling that will stymie those seeking justice.

Collective action must also be enabled through collective rights. Most people do not have the capacity or tenacity of the subpostmaster Alan Bates, so legislation like the Data Bill that only provides mechanisms for individuals, and not for groups, will fail the majority. In each of the scandals described above, the government concerned eventually took on providing redress for everyone affected, not just those who complained. It is right for the onus of identifying those people to fall on the organisations that caused the damage – who, after all, hold the data needed to identify them – rather than on the campaigners representing them.

The need to support collective action against big tech players was recognised in the Online Safety Act, which provides for “super-complaints” by organisations representing individuals and groups harmed online. As ORG has described, it is currently extremely burdensome for representative bodies to take action around data rights. The Data Bill is a missed opportunity to make representative actions easier by bringing Article 80(2) of the GDPR into law.

The Post Office scandal has some stark lessons for how data and AI systems should be integrated into our lives and society. The Government’s Data Bill fails to learn them, and in multiple places makes it even harder for victims of computer errors to get justice.
