Ensuring People Have a Say in Future Data Governance

Gavin Freeguard

On 5th December 2022, CONNECTED BY DATA organised an event in parliament, hosted and chaired by Lord Tim Clement-Jones, to explore three key areas around the future of data governance: automated decision-making, data at work and data in schools.

These are all areas that could be affected by the Data Protection and Digital Information Bill, expected to return to parliament for its second reading at some point in 2023. We think the Bill represents an opportunity to influence how data is governed in a more democratic and participatory way, but worry that – in its present form – it undermines existing safeguards and misses the chance to extend democratic data governance.

The three areas under discussion also represent domains where growing data collection and use could have both significant benefits and harms in the future, regardless of what happens to the Bill. The event invited opening contributions from civil society and academic experts on each topic before opening up to a wider discussion. The experts were on the record unless they requested otherwise, with everyone else being unattributed under the Chatham House Rule.

Introduction

Lord Tim Clement-Jones welcomed the audience by saying the purpose of the event was not only to explore and raise concerns about the Data Protection and Digital Information Bill, but also to think beyond it (including on the governance of artificial intelligence, currently the subject of an inquiry by the Commons science and technology committee). He noted that public interest and concern around data had been raised to a higher level during the pandemic, through discussions around contact tracing apps and the GPDPR scheme for using people’s health data.

Gavin Freeguard, Policy Associate at CONNECTED BY DATA, gave a quick introduction to the Bill, from the National Data Strategy to the Data: A New Direction consultation to its introduction into parliament. He also summarised a civil society workshop organised by CONNECTED BY DATA in September where attendees raised several concerns about the Bill – from the risks to data adequacy and business, to worries about the future of the Information Commissioner’s Office and the powers granted to the secretary of state.

Automated decision-making

Dr Natalie Byrom, Research Director of the Legal Education Foundation, explained the impacts of automated decision-making in human terms, highlighting a case in the Netherlands where a mother was falsely accused of benefit fraud because of an algorithm and the toll taken in clearing her name. The current pressure on public spending in the UK means more government organisations could turn to automated systems.

Automated systems should operate within the rule of law, specifically:

  • Transparency, for decision subjects, in procurement and of operating systems
  • Equality, with positive duties to promote equality
  • Accountability, although existing mechanisms for seeking redress are complex and prohibitively expensive to access.

The Bill currently falls short on these measures.

Dr Jeni Tennison, Executive Director of CONNECTED BY DATA, said that the Bill is where we can make a difference in the short-term to how automated decision-making affects our lives – there is no other current vehicle. Having a ‘human in the loop’ – ensuring a person is involved in an algorithmic decision-making process – is too passive. There are particular risks associated with the use of automated decision-making in the public sector (for example, decisions may be about access to public services or benefits which the citizen cannot opt out of) – we should therefore ensure that the public sector acts above and beyond the private sector.

There are three tests we should apply to measures around automated decision-making:

  • They should recognise the effect on groups, communities and society as a whole, not just individuals. The systemic impacts of automated decision-making – which can be massive – are extremely difficult to detect from an individual perspective. This should also acknowledge that it is not just personal data that can affect people’s lives
  • They should ensure rights for decision subjects, not just data subjects. Data subjects are the people that personal data is about; decision subjects are those affected by decisions made using data. (Algorithms do not need to know much about us and can base decisions on data about people they think resemble us – for example, the A Level algorithm fiasco.)
  • Provisions should have teeth. Is the monitoring and transparency effective? What are the requirements of those using an automated system? Can people easily seek redress (too often this is only possible once harm has been done)? Have people and communities been involved in the design of these systems?

Without this, there is a real risk of harm and of reduced trust – which, in turn, reduces the adoption of such technologies, as people become less willing to accept future uses of them.

Discussion followed on several topics:

  • Should the law start from scratch – are we trying to shoehorn the issues into existing law? Natalie replied that we do need a new emphasis in law, which is currently too reliant on individuals having to identify harms that have happened – the current Bill is an opportunity to change that. Jeni underlined that the focus on personal data distracts us from the need to focus on decisions and their collective impacts; Natalie felt there was also a risk of focusing on principles to promote innovation when these values might put vulnerable people at risk. One attendee noted that the use of automated decision-making might affect those we do not traditionally think of as vulnerable
  • Another attendee put forward a point of view from consumers, including their concerns about the Secretary of State’s powers and that there were some woolly definitions in the Bill – partially automated decisions might not be covered, while ‘significant decisions’ were open to interpretation. Jeni replied that people (including consumers) should be able to be represented by third party organisations, class-action style, and that proactive publication of the impact of algorithms (e.g. Algorithmic Impact Assessments) would be important for tackling impacts on groups. Natalie noted that redress (e.g. for groups) would need to be at a sufficient scale to alter or deter behaviour
  • A final question covered the skills gap required for much of what was discussed, including around the procurement of algorithmic systems, and noted that there were already challenges in getting the right skills into the public sector because of the pay gap with the private sector. Natalie mentioned the work of Bianca Wylie on professional duties and whether such duties around ethics could apply to procurement and development professionals. Jeni thought it would be good to see more auditing skills around the use of data and automated decision-making in the National Audit Office, which could promote rigour more widely.

Data at work

Andrew Pakes, Deputy General Secretary at Prospect Union, was worried about the ‘deafening silence’ around the nature of work and how it fits into this conversation – the specific nature of the work relationship needs to be recognised. If data is power, then the traditional imbalances of power in the contractual work relationship can be transformed into something more complex when data is involved, and can become further entrenched. Workers should have a right to consultation where new technology is being deployed – if such measures are voluntary, they will not happen.

There are serious questions of accountability and liability. Many enterprises are not big enough to develop their own algorithms and will often import them from elsewhere, leaving a big question about where liability and risk lie. Concepts of managing risk and reducing harm in the workplace are not new and should not be pushed aside as ‘novel’ when it comes to data and AI: in the earlier industrial revolution, an organisation might not know the finer details of how a furnace worked, but it was clear it would be accountable and liable if something went wrong. There should be clear employer accountability and liability, in a similar way to how health and safety works.

Conversations around making things better for business tend not to think about human rights, and this is true of the Bill, which waters down some existing commitments and leaves others absent. Data Protection Impact Assessments (which the Bill would no longer require) are a tool that employers should be obliged to use in discussions with employees and their representatives on the deployment of new technology. Without such tools, a lot of discretion is left to individual organisations, which is worrying. The risks to employees when data is used in the workplace mean there needs to be a dedicated space for conversations about data that explore workplace and employment power relationships – the Bill does not provide that. The Bill is also often silent on collective impacts – around equality in general, but also specifically around employment, where automated decisions are made on hiring, firing and performance. And the Bill diverges from principles currently shared with Europe, where several countries are actively and positively exploring these issues in the workplace, rather than building on them together.

Another speaker also felt the Bill needed to recognise work in its own right, as a high-stakes environment that affects our everyday lives and where many people increasingly come into contact with data- and AI-driven systems. Foregrounding and making explicit the development of a trustworthy and responsible ecosystem around them is essential for maximising the positive impacts of new technology. Investment in and adoption of AI is increasing – there is a huge political push for it, too, and regulation is very light-touch – even though its impacts are not always clear. Employees are reporting changes but are often not consulted until very late.

There are strategic questions around the Bill – how far should we go in trying to amend it, rather than leaving these issues to an overarching framework or another forum? Even though there are other levers – the Equality and Human Rights Commission, the Information Commissioner’s Office, and initiatives around AI like the new AI Standards Hub – the Bill provides an opportunity to think about underlying principles for data protection and digital information (which could help align with others internationally) and to challenge some specifics. There are gaps in its thinking about equality impacts, and it widens the powers of the Secretary of State without really saying why.

We should challenge:

  • the Bill’s removal of article 35, subsection 9, which is the obligation to consult with workers and their representatives
  • the removal of the balancing test around legitimate interests – whether the legitimate interest in using someone’s data overrides their fundamental rights
  • the reduction of access to information, which is already limited but narrowed further with reforms to Subject Access Requests
  • protections not being extended to semi-automated decision making.

We could add new rights through an annex or schedule, including:

  • individual and collective rights to information about automated and data-driven systems, including purpose and remit. Collective rights are vital since impacts will be collective, cannot always be understood by an individual, and will require group or collective access with new and novel roles for bodies such as unions
  • a new pre-emptive duty to assess impacts not just on equality, but also work. The proposed reforms to Data Protection Impact Assessments could provide an opportunity for discussion.
  • identifying and establishing principles in areas like access, equity and participation.

Questions and further discussion covered the following:

  • A strong case had been made for the deployment of new technologies being context-dependent, but should there be any red lines – circumstances where data and AI should never be used? Facial recognition and monitoring people when working at home might be one. Biometrics might be another, as might the use of AI in experimental ways not relevant to the issue at hand. High-level principles and agreements about when new technology should be used should be established, since that was more of an issue than specific technologies – for example, there are work scenarios where GPS tracking of workers can be important for health and safety, but it should never be used in performance management. There could also be some red lines on transparency around key aspects of systems and their impacts, with regulators having powers to certify, suspend or investigate as forms of redress
  • Language can be used too loosely, such as ‘human in the loop’ – which humans? It is very different whether it is a manager that can impact an algorithmic process or the workers affected by it
  • There are proven economic arguments about the benefits of increasing trust and involving workers in the production, deployment and use of new technology – this is well established elsewhere but missing in the UK debate
  • One data protection expert noted that many clients seek to deploy established products from other jurisdictions (particularly the USA and Asia) and want to know how they interact with UK GDPR – often by asking what is the bare minimum they can get away with. People in these decision-making positions are generally from ‘advantaged’ demographic groups, and focus on innovation without seeing the risks – ethics and safe business practice are often seen as ‘nice to have’ rather than core, with their business models built on revenues rather than rights. There are no business incentives to do the right thing and care about human rights because data is so monetisable. Another participant suggested that statutory codes of practice from the regulator might provide guide rails for organisations to do the right thing, but oversight of such processes needs to be end to end in order to work – for example, a process of supervision where third parties can work with the regulator to conduct independent audits. We also need to educate as well as incentivise employers to take responsibility for using data- and AI-driven systems – at present, some data protection officers are not up to the task
  • Difficult questions need to be picked up in this Bill, otherwise they will get left out of the next one
  • AI is a continuous learning system, and we should consider ethics and much else besides at the development stage as much as in deployment – tools like algorithmic impact assessments should be deployed at the earlier design stages. Focusing on the creation, not just the outcomes, of such systems allows us to design a better set of outcomes and better manage their regulation.

Data in schools

Professor Sonia Livingstone of the LSE said a key question being considered by 5 Rights’ Digital Futures Commission was ‘what would “good” look like for children in a digital world?’ The issue of data being collected about children, particularly in education, is still somewhat under the radar, despite the particular position children are in when it comes to data and exercising their data rights. Children use all kinds of edtech (e.g. Google Classroom), which collects all kinds of data from them – on learning, assessment, behaviour management and safeguarding. Children have no choice over the data collected from them by edtech: DfE has no inventory of edtech use, data protection isn’t sufficiently implemented by the ICO in relation to edtech, and it is hard to understand where children’s data is going or how it is used. We should be concerned not only at how data is being used in schools but at how it is entering into and being used in the commercial data ecosystem, and what the consequences are going to be for children down the line.

Schools are in an ‘invidious’ position: they are the data controllers, with edtech platforms the data processors, even though schools have few resources to understand what edtech is doing with children’s data. There can be ambiguity in how data protection laws and principles should be applied, as the school is the intermediary between the edtech platform and its students (and their parents). There is an asymmetry in the relationship between schools and edtech platforms, with schools pretty powerless and unguided on how to contain the flow of data (and there can be changes in whether edtech platforms are the processor or controller as teachers and students conduct particular journeys, e.g. clicking a link from Google Classroom to YouTube). It can then fall on students and parents to be responsible for themselves, which is impossible and very unclear when it comes to seeking redress.

There are big questions about the extent to which commercial research and development is being conducted on children while they are learning. Some changes to the Bill could help:

  • Data Protection Impact Assessments could be done by the companies or the Department for Education rather than schools, especially when DfE has instructed schools to use particular edtech platforms
  • The role of Data Protection Officer needs strengthening, not weakening
  • The Government needs to hold these companies to account far more. For example, the Netherlands has renegotiated its ‘relationship’ with Google, and other governments have been clear (Denmark and Germany have banned some platforms) that the use of children’s data in the ways described above is not acceptable.
  • Putting particular importance on protections for children as data subjects is also vital

Jen Persson, Director of Defend Digital Me, picked up the question posed by the event title – how do we ensure people have a powerful say in how data and AI are used? – in the context of education, and summarised the state of public data used in schools and education. We have to understand what ‘engagement’ means: there has been a decade of engagement with the public and young people, and lots of high-quality research asking young people what they want done with their data, but this has been widely ignored. Their key concerns are around the misrepresentation of their lives – when data is used to do things to them, not for them, and without their involvement, they lose agency and control over their own lives. The National Data Strategy does not represent children accurately – they are seen either as vulnerable innocents or as criminals in most policies, with nothing in between. Their voices are left out.

We need to map the landscape, understand what is wrong today, and ask where lessons have been learned from the history of data protection. Statutory guidance – from DfE on data protection and the rights of the child, or on safety and technology – is not good enough and even woeful in places. The Bill fails to acknowledge that data touches almost everything that happens in a school. Much of it is passed on to the state or edtech providers, but very little of this is communicated to parents or students. The ICO audit of DfE (2020) highlighted several weaknesses – not respecting basic data protection principles in stewarding the National Pupil Database, not keeping records of processing, not being able to demonstrate compliance or fair processing – which the Bill reduces or removes the need to tackle, even though existing standards are not being met.

The big issues include:

  • Accuracy: not all data is equal. Education data is more often opinion and inferred data than facts about personal characteristics provided by families.
  • Policy that does not support good data practice
  • Lack of training for teachers and students
  • Repurposing.

Repurposing might be the biggest, with so much data collected in schools being passed on to third parties, and data protection officers not having oversight of the data ‘daisy chain’ of who this data is passed on to – from local authorities, to police, to researchers, integrated with health data and then sometimes even fed into predictive risk systems. Under the Bill, data protection officers have no ability to look ahead at what will happen to this data. With no oversight, children are being failed.

Data uses are already highly invasive, and there is a direction of travel towards tracking emotions and moods for mental health prediction. We should look at lessons from where data protection has not worked – such as immigration, police data on gangs or education data being used by a gambling company for age verification – to inform the future. Education data is highly sensitive, not only about learning but on issues such as mental health or at-risk children, and is widely distributed across the public sector.

Discussion touched on several points:

  • Universities can often use school data to exclude students – there is a wide disregard for the idea that data belongs to students and should be used in their interests
  • There is a crossover between data in schools and data at work – teachers are being monitored, as well as students (with technology running on their computers over a summer break). This is an education and an employment issue. We should also reject the notion that if only schools explained things around data use better, the problem would be solved – it would not be, and schools are in a tricky situation in terms of the platforms they are expected to use. It is almost impossible for schools to deal with these challenges without outside support – they need clearer guidance and recommendations (which Scotland is better at)
  • What can we learn from the fact there has been a lot of engagement but lessons have not been learned? We know that there is support for public interest research with this data if people know what it is being used for – people do not want datasets being combined in data lakes, and there is concern about data being put in commercial hands. Predictive mental health should be a red line, as should the use of algorithms in children’s social care, which did not work and was banned in the US.

Close

Lord Clement-Jones thanked attendees and hoped that we could increase the number of people taking this discussion seriously. He noted there were other opportunities beyond the Bill – such as the inquiry into AI governance – and that, even if amendments were unsuccessful, the Bill was an important vehicle for debate. It was not yet clear whether approaches to the use of data and AI would be limited to a sector-specific approach or – as this discussion had shown the need for – would also take a more horizontal approach, with the common factors and values requiring some common principles to underpin everything.
