AWO Report: Does the law allow non-data subjects to challenge algorithmic harms?

Jeni Tennison

The need for collective action to counter harms caused by AI and algorithms processing non-personal data is one factor that has led to calls for improved collective data governance, underpinned by group privacy and collective data rights.

Much of the literature on these rights remains abstract, and there are counterarguments to the introduction of collective data rights in law. Asaf Lubin has argued that there’s a risk that collective data rights could lead to “unjust collectives” that prioritise the rights and interests of the majority over those of the minority. Others point to the existing set of legal tools available to people and groups harmed by data and algorithms, such as equality, public administration, consumer protection or employment law, and ask whether additional rights arising from the use of data or algorithms are really necessary.

We wanted to shed light on the need for collective data rights by examining legal remedies currently available in the UK in three scenarios where the people affected by algorithmic decision making are not data subjects. Our goal was to identify whether and where new collective data rights might be needed to avoid or gain redress for collective data harms, and hence to inform future regulation of data and AI.

As part of this work, we commissioned the law firm AWO to conduct a legal analysis of three hypothetical scenarios where automated decisions may be made based on non-personal data:

  • a police force using historic crime data to determine patrol allocations in ways that increase Stop and Search use in over-policed neighbourhoods
  • a train operating company using algorithms based on historic data to determine surge prices in ways that disadvantage consumers
  • a social media company removing legitimate content in ways that undermine the free expression rights of those interested in LGBTQ+ or non-English content

AWO found that there is less protection for people harmed by automated decision making (ADM) when they are not data subjects; that there are gaps where harms are indirect and diffuse, such as those arising from biases in ADM; and that there are significant barriers to access to justice through equalities or sectoral legislation, due to the low transparency of automated data processing and constraints on who can bring claims for algorithmic harms.

Their full report is linked below; we also hosted a Connected Conversation in September 2023 to discuss the topic further.

 Read more
