The last couple of weeks have really been focused on our response to the Data Protection and Digital Information Bill, and starting to coordinate with other people and organisations who have Opinions about it.
Policy positioning
I wrote a bit about our main positions in my previous weeknotes, which we’ve refined over the last couple of weeks. We want the Bill to reflect the collective impacts and risks of modern data processing:
- to ensure risk assessments, balancing tests, the design of codes of conduct and other decisions about data factor in collective, societal, equality and environmental impacts
- to protect the rights of the direct, indirect and group subjects of automated decision making
- to increase the democratic and consumer accountability of decisions about data through:
  - better transparency of the decision-making process and outcome
  - greater participation by those affected by those decisions
  - better collective complaint and redress mechanisms
Decision subjects
I’ve been digging into the second of these points – protecting the rights of the direct, indirect and group subjects of automated decision making – in a bit more depth.
Initially, I was framing this in terms of “decision subjects” rather than “data subjects”. Then I went through the Bill to identify all the places where there are references to “data subjects” that should really be about “decision subjects”. It’s fairly easy to make the distinction:
- The rights, interests and freedoms of data subjects relate to what data is collected and stored about them, and to safeguards around the security and sharing of that information, to protect their privacy.
- The rights, interests and freedoms of decision subjects are related to the ways in which that data is processed and the impacts those decisions have (eg on the prices they’re offered, the search results they get shown, how long they’re imprisoned for, how they are targeted for interventions).
A lot of the rights of decision subjects are protected through other pieces of legislation: the Equality Act 2010, Human Rights Act 1998, Consumer Rights Act 2015 and so on. The challenge the Data Protection and Digital Information Bill (and GDPR) tries to address is how data can be used in these decisions, including to profile people, and particularly how those decisions can be automated.
This is the place where GDPR gets things wrong (in my opinion). It operates under the assumption that the only people affected by data-based or automated decision making are data subjects. But while decision subjects are frequently data subjects too, there are situations where this isn’t the case. Automated decisions may be made:
- based on non-sensitive data about a decision subject (such as a postcode), combined with sensitive (special category) data about other people (such as their race) – this is relevant because there are particular restrictions on automated decisions using special category data, such as the need to gain consent for them
- in the absence of personal data about the decision subject, for example if they are not logged in to a website – decisions may be made based on assumptions derived from data about other users, or the very fact that data about someone is missing might itself influence the decision
- about geographic areas, properties, facilities, public transportation, websites and so on – perhaps based on personal data about some of the people who live in, visit, or use those real or virtual places – that then affect the other people in those groups
Some scenarios to make this concrete (with a rough code sketch of the first one after the list):
- A profiling company uses data about people’s mental health (special category data) to construct a model that predicts mental health conditions from people’s social media feeds, and from that estimates how much time people are likely to take off work. A recruitment agency uses this model to perform due diligence on candidates and weed out those who are likely to have extended absences. The recruitment agency never uses any special category data about the candidates directly.
- A person who has locked down their web browser such that it doesn’t retain tracking cookies or share information such as their location visits an online service. The online service has collected data about the purchasing patterns of anonymous users, and knows they are willing to pay more for the service, so provides a personalised price on that basis. No personal data about the purchaser is used in determining the price they are offered.
- An electricity company gets data from the subset of their customers who have smart devices in their home about the details of their home energy consumption. Based on this data, they automatically adjust the times of day when they offer cheaper tariffs. Every customer of the electricity company is affected, not just those with smart devices.
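To make the first scenario more tangible, here’s a minimal, hypothetical sketch in Python (using scikit-learn). Every feature, data point and threshold in it is invented for illustration; the point is just the shape of the problem – the model is trained on special category data, but the downstream decision about each candidate only ever touches non-special-category inputs.

```python
# Hypothetical sketch: a model trained on special category data (mental
# health status) is later used to score candidates using only
# non-special-category inputs. All data and features here are invented.
from sklearn.linear_model import LogisticRegression

# --- Profiling company: training labels are special category data ---
# Features derived from social media activity (not special category data
# in themselves): [late-night posts per week, mentions of tiredness].
training_features = [
    [12, 3],
    [1, 0],
    [9, 4],
    [0, 1],
]
# Labels: whether the person disclosed a mental health condition
# (special category data under UK GDPR Article 9).
training_labels = [1, 0, 1, 0]

model = LogisticRegression().fit(training_features, training_labels)

# --- Recruitment agency: scoring touches no special category data ---
candidate_features = [[10, 2], [0, 0]]
absence_risk = model.predict_proba(candidate_features)[:, 1]

for features, risk in zip(candidate_features, absence_risk):
    # The agency weeds out candidates above an (invented) threshold, even
    # though it never processes special category data about them directly.
    decision = "reject" if risk > 0.5 else "proceed"
    print(features, f"predicted absence risk = {risk:.2f} -> {decision}")
```

The candidates’ own special category data never appears anywhere in the agency’s processing, yet the decision is entirely shaped by other people’s special category data baked into the model.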
It does feel a little like splitting hairs, but to me it’s clear that decision subjects should have the rights and safeguards currently accorded to data subjects.
Defensive positions
A lot of our focus is on changes that enhance people’s rights beyond those that currently exist in the UK GDPR and Data Protection Act 2018. But of course the Bill also removes rights and freedoms in a bunch of places, so we need some defensive positions too.
The two areas we’ve identified so far are:
- In Clause 17(3)(i), the Bill removes UK GDPR Article 35(9) which encourages data controllers to “seek the views” of data subjects or their representatives during the creation of data protection impact assessments. This is one of the few areas where there’s explicit encouragement for more participatory data governance, so we’d like to see it enhanced rather than removed.
- Clause 5(2) and Annex 1 introduce the notion of “recognised legitimate interests”: processing purposes that would otherwise have to be justified as legitimate interests, but which the democratic process (in practice, the Secretary of State) has pre-approved. This takes them outside UK GDPR Article 6(4), which would otherwise require data controllers to factor in a range of things, including the possible consequences for data subjects (ie to carry out a balancing test). The introduction of recognised legitimate interests pretty clearly removes a set of controls and restrictions on data processing that are there to balance different interests. Again, we want to see these enhanced, including with consideration for collective harms and impacts, rather than reduced.
I’ll be looking for further examples as we do more work around this.
Coalition building and campaign planning
The publication of the Data Protection and Digital Information Bill has been a good prompt to start building more of a coalition around this work. We started this off last week with a conversation with a small group of friends, to explore both where our asks overlap and whether there’s appetite for more concrete activity around the Bill.
I think it’s fair to say that people didn’t think much of the Bill, but were also fairly sceptical about our ability to change it. There are many bigger fish for backbenchers and the opposition to fry at the moment, and a notable lack of interest or vision on either side of the House. But there’s also a lot of uncertainty, with a new Prime Minister being elected on the day the Bill goes to its second reading, and the likelihood of a new DCMS Secretary of State and potentially new ministers, which in some ways might leave openings for change.
With Jonathan on leave over the next three weeks, Gavin and Renate are going to be helping us out with laying some of the groundwork for responding to the next stages of the Bill’s passage through the Commons and Lords, including arranging a broader meetup in September of others who are intending to campaign around the Bill. If you’re interested, ping us on Twitter or drop me a line.
That’s it for now. But do read Jonathan’s weeknotes about developing our narrative as well!