I spoke on the panel: The future of the UK’s digital economy, alongside Bojana Bellamy, John Whittingdale MP, Mahlet Zimata and Cyrus Mewawalla, at the New Statesman and Tech Monitor’s Digital Responsibility Symposium 2022.
The government’s ambition to make post-Brexit Britain a global tech hub could result in the rollback of data protection laws, with the focus turning towards encouraging and enabling technological innovation. What impact will this have on business technology, and those who lead it? Are short-term returns for investors being prioritised over long-term trust in technology? Is GDPR legislation likely to be an early casualty of the government’s strategy – and if so, what can businesses expect in its place?
Here’s a write-up of the notes that I made for the panel.
What would you like to see in the UK’s future data governance regime?
We would love to see communities involved in the practice of data governance.
It’s tough to make data governance decisions.
- It’s difficult to know what kind of wider impacts and harms data and digital systems are going to cause, because organisations have limited visibility and experience.
- It’s difficult to decide what’s in the best interests of people and their communities to meet ESG ambitions because this involves weighing up different interests and values.
- Even if you get to a good set of principles around things like bias and fairness, it's really hard to decide what terms like "bias" and "fair" actually mean in a given context.
It's recognised worldwide that we need to put communities, and collective decision-making about these questions, at the heart of the way we do data governance.
There is a real opportunity in the UK to take the lead in this. We could enable, encourage and enforce collective and participatory approaches to data governance by setting the right public policy and regulation. For example, we could, in the upcoming Data Bill, expand legitimate interests to:
- enable companies to take into account the wider social and environmental impacts of data processing
- encourage them to consult with the people who are going to be impacted by that use of data
- enforce that they have to make their balancing tests public so we can see their rationale
Data and AI are global, so we can't regulate or set standards in isolation. In addition, global rules make it a lot easier for businesses, particularly digital businesses, to operate internationally. So we would like the UK to take an international perspective. We would like to see the UK continue its leadership not just in creating a great environment for AI, but in creating an environment for the right kind of AI – the kind that speaks to society's needs. We would like to see the UK not just thinking about data adequacy agreements but playing a role in promoting broader international alignment around what good looks like.
How can the UK ensure its data regulator, the ICO, is trusted?
There are soft and hard ways of shaping a trusted and trustworthy environment.
Looking at the hard aspects, we need structures that promote accountability, enforce the law and provide redress. That means we need an effective ICO. The recent work I've done comparing food and data regulation shows a huge discrepancy between the way we protect people from harms caused by food and the way we protect people from harms caused by data. In particular, the food system is much more proactive around inspections, whereas the ICO is oriented around providing advice. We have to ask whether the range of enforcement measures the ICO has (oriented around fines) is an effective deterrent for organisations.
There is real concern that some of the proposals in the Data: a new direction consultation will undermine the independence of the ICO by making it report to the Secretary of State for DCMS (currently Nadine Dorries).
The ICO isn’t the only organisation that helps people feel protected. Civil society organisations such as Foxglove, who use strategic litigation to protect people’s digital and data rights, are an important part of the environment. So are emerging data institutions such as Worker Info Exchange which help people to collectively understand data held about them.
But the soft things matter too, including tone. Are we putting the emphasis on protecting people, communities and societies? Or is the focus on easing the burden on businesses (which is also important, but isn't the thing to prioritise if you want people to trust the regulatory environment)? With that in mind, the proposals to require the ICO to focus on economic growth undermine its trustworthiness.
What is the role of the public in deciding the rules that govern data flows?
There is an attitude in government that people have it wrong and need to be educated to see the benefits of data. Funnily enough, there’s also an attitude from privacy advocates that people have it wrong and need to be educated to see the harms!
In practice, when organisations run citizens' juries, they do provide some education at the start, from different perspectives, to give a balanced picture. The result tends to be that people are very positive about uses of data for public benefit, and very sceptical about uses of data for commercial profit.
Data flows get designed by organisations. They choose what to collect, how to use it, who it gets shared with, and for what price. There is deep public dissatisfaction with how some of those decisions are panning out and with the organisations that are making them, particularly but not only big tech. This is a pattern across health, finance, energy and government, and it is pretty consistent across any public attitudes survey or consultation.
Doing data “responsibly” is a value judgement. What is “ethical” and “fair” depends a huge amount on context, and on who you ask. It is risky for organisations to make those decisions on their own:
- If they get it wrong and cause significant harm (even profit-hungry corporations don’t want to do that)
- If they are out of step with public opinion and suffer a backlash and reputational damage
That's why I think involving the public in decisions about data, at every scale – from public policy decisions down to the granular ones organisations are making every day – is vital. A good process of public participation in decisions is a way for organisations not only to do the right thing, but to be able to show they tried their best to do the right thing. It protects them and reduces the risk of reputational damage.
Is data strategy a priority at C-suite level in the UK and how does it fit in with ESG?
Of course data strategy should be a priority. Data is unavoidable, and being conscious about how it's collected, used and shared is vital for organisations to achieve their goals.
Good data governance is part of good governance – the G part of ESG. It should not be off to one side, but an integral part of being a good organisation as a whole.
Good data governance also involves using and sharing data to advance environmental and social goals – the E and S of ESG. We saw this during the pandemic, when Google and telecoms companies shared aggregate mobility data to help understand adherence to lockdown. Or when Mastercard shared aggregate payments data to help understand the impact on the high street. Or when Microsoft shared data about real broadband speeds in the US to understand digital inclusion – particularly important during lockdown, when schoolkids needed internet access.
Good data governance is as much about making data available to do good as it is protecting it to avoid harms.
How should C-suite executives get ready for this new age of data governance?
C-suite executives should ensure their organisations are talking to people to find out what they think. They can do it informally or formally, but the attitudes of customers, clients and citizens must be part of how decisions are made about data.