On Tuesday 25 June, Connected by Data hosted a Question Time-style event on a possible Labour government’s approach to data and AI in public services. You can watch a recording of the event on our YouTube channel.
[The Data and AI Civil Society Network also organised a political hustings later that evening, which you can watch on YouTube.]
Connected by Data’s Adam Cantwell-Corn, introducing the event, noted these are questions not just of technology, but of values, politics and perspectives.
The chair of the event, John Thornhill, innovation editor of the FT, noted that the title of the discussion makes the heroic assumption that Labour will win the election – but hoped it would be relevant to whoever wins. Although it had not been a high-profile topic during the campaign, it was an important one – there isn’t just one way of adopting tech in the public sector, and there are lots of voices seeking to influence it. So who should the next government listen to, and how should it proceed?
How to protect digital human rights?
Our first question was from Miriam Jones at Liberty, who are particularly concerned about the use of AI in policing, which is speeding up and not letting up. Rather than introducing non-policing solutions to social issues – approaches that work with communities and are proven to work – the government is doing the opposite, supplementing bad policy and dodgy technology with hallucinating AI. She wondered if some uses of AI should be banned outright, and asked what government needs to do on rights and redress.
Caroline Selman, Public Law Project, underlined the need for transparency, which underpins other rights and safeguards. We don’t know how automated decision making is being used in government – mandating the Algorithmic Transparency Recording Standard would help. She noted that public law is about ensuring fair, lawful and non-discriminatory decisions – that doesn’t change because computers are involved, but it does need to adapt.
Jeni Tennison, Connected by Data, was also worried about a ‘gung ho’ approach to AI that fails to consider consequences. When systems are rolled out in big bureaucratic systems with no consideration of the people on the sharp end, bad things happen – we need better engagement with the public and stakeholders from an early stage, thinking about impacts on communities (equality, the environment, the relationship between citizen and state) as well as reporting, monitoring and redress. She was concerned by a recent Tony Blair Institute/Faculty report that suggested citizen controls over privacy alone would suffice, and that bias could be coded out.
Jeegar Kakkad, Tony Blair Institute, welcomed Jeni’s challenge to their report. He thought there were a couple of ways of addressing challenges around the use of automated decision making. The first was adhering to the PEARS framework – that anything being designed that looked like automated decision-making should meet criteria around Predictability, Explainability, Accountability, Reversibility and Sensitivity. If an algorithm couldn’t satisfy this, it shouldn’t be used. The second was digital ID – not in a ‘done by the Home Office’ way, but something decentralised under the control of an individual, giving transparency and a ‘snail trail’ of how their data had been used, allowing them to opt in and out of what was being shared with whom.
James Plunkett, Nesta, was also concerned about the ‘egregious, reckless’ uses of AI: adopting them foolishly could make inhumane bureaucratic systems even more inhumane – he added Robodebt to Horizon as recent examples. But he worried that we might not be making enough use of these technologies for good, spending all of our time talking about the risks. Progressives, in particular, are now in a place of deep scepticism – in contrast to the early 2000s and the early internet, when there was much more excitement.
Discussion focused on whether any useful system could tick all five boxes of the PEARS framework, and on the deficiency of relying only on transparency, redress, and assurances from developers or procurers that everything is fine – which should lead instead to more upstream thinking, a democratic mandate, and public consultation on decisions to use AI.
In the Zoom chat, there was an active discussion about how we should upskill ministers, MPs and civil servants on AI and automation, so they are aware of what these technologies are (and are not) capable of, what they rely on (such as good data), and how to be ‘informed, confident and critical commissioners and service designers’. One commenter thought there was a risk of ministers being ‘swayed by hype and sales patter by big tech firms’, who have more resources than other voices (including civil society) – this also prompted a comment about public understanding, and whether citizens know enough to properly advocate for their rights.
How can digital services be human and inclusive?
The second question came from Margaret, a member of the People’s Panel on AI. She spoke of conversations she’d had with older people who were worried about losing face-to-face contact and friendly interaction as services moved online. She asked how much local government, care services and doctors’ surgeries should adopt digital tools and how to avoid leaving the elderly behind. (Another People’s Panel participant added in the chat that people they had spoken to were ‘scared that there’ll be more isolation, loneliness and everything will be a lonely world “run by robots”’.)
James, who has been thinking about local government digital, picked up on Margaret’s use of the word ‘humanise’ and noted the aim of these technologies in public services should be to make them both more technically adept and more human. Drawing on his experience at Citizens Advice, he said we need to think about the trade-offs – we often assume more digitally enabled services mean less accessibility for certain groups, but sometimes it was the face-to-face services that were least accessible, as there were no free slots – tech could be used to free up time for advisers. We need specificity in the debate – whether investment in tech makes services more or less accessible will come down to the detail. In the chat, James also suggested introducing concurrency for regulators on equalities obligations (something that already happens in competition law), which would give regulators an explicit mandate to stop discriminatory uses of technologies in their sectors (e.g. the FCA in finance).
John asked Jeni whether the public engagement she had suggested – listening to everyone’s concerns – would be expensive, cumbersome and overburdening: a kind of ‘digital NIMBYism’. She said these are the concerns people often have, but we could think of it as extended user needs research, really understanding the kinds of interactions people want with public services. We shouldn’t assume those will always be digital, and need to keep mechanisms that allow people to talk to people. It’s only by understanding people’s full needs that we can design public services for them – and public services have to serve everyone. We also need to involve public servants in design – they need to have fulfilling jobs, too (and recent work from Connected by Data and the TUC found public sector workers are often excluded from this). Ultimately, engaging people in this design will lead to smoother adoption of better services, saving money in the long run.
Jeegar said that when TBI thinks about how to modernise public services, they start digital first, and agreed that Jeni’s points on user need were really important. Looking for international examples of how governments are tackling this, their best conversation had been with Singapore, which is at the forefront of digitising public services and asks: where do we need to insert friction – humans – because that makes sense for the user? He noted the importance of community hubs in that system and, responding to Margaret’s question about the closure of branches, said you need them in local communities to anchor human-first options, so people can choose between those and remote options. But you should start digital first.
Caroline raised several complexities around digital exclusion. Research on the digitisation of the courts service shows we have to recognise that people need wider support, such as legal advice or emotional support. People are comfortable using digital tools in some contexts – such as online shopping – but not in others. Digital literacy and digital exclusion come in different forms – in Universal Credit and social security, which are digital by default, exclusion comes from services not being designed for the reality of people’s lives. Age is a really important part of exclusion – but not the only one. (A Promising Trouble piece shared in the chat also covered this.)
Jeni underlined how the People’s Panel on AI showed that the assumption that ‘normal people’ don’t understand and can’t have a say around tech is wrong – people are able to say what they want from their lives and the role of technology within that. We should also recognise that things like the NHS App and GOV.UK are centralised approaches, and pay more attention to local government, which supplies services and needs flexibility around the communities it serves.
In the chat, there was amazement that the government hadn’t published a digital inclusion strategy since 2014, though many other countries (including Chile, Portugal, Greece and Canada) have done a lot of work to blend the digital and the analogue. A recent report from Promising Trouble proposed making digital services inclusive by design, while a recent Demos report looked at humanising and bringing together government services at a local level. James noted that the digitisation of Universal Credit saved the system during the pandemic, allowing it to cope with a surge in caseload; someone else questioned this, since the digitally excluded still missed out, though Citizens Advice supported people to apply in person and digitisation freed up DWP capacity to deal with the surge. There was also a quote: ‘if you digitise a broken service, you just get a digital broken service’.
Who should access our data in our NHS?
The third question came from Nicola Hamilton, head of Understanding Patient Data. She noted widespread agreement on the value of using health data – for care, planning, evaluating services, research and innovation – and recent investments in projects like the Federated Data Platform and Secure Data Environments. But she also highlighted the importance of trust – though trust in NHS data is generally high, there have been some signs of a decline, and failed projects (like care.data and General Practice Data for Planning and Research) and the involvement of big tech have increased concerns. Who gets access to health data under what terms?
Jeegar highlighted a recent TBI report on health data, which sees two elements to the health data story. The first is for patients to have an electronic health record – data all in one place, with the patient controlling what they share, with whom, and for what purpose. The second is creating a National Data Trust, building on the Secure Data Environment approach, which could also take advantage of health data for medical discovery. This would be majority-owned by government, with the private sector holding a minority stake, and access granted to trusted researchers for medical discovery and clinical trials using anonymised data. Accelerating access to anonymised health data is, in TBI’s view, the best way to support patient care, keep the NHS sustainable and support medical discovery. They understand the importance of trust, and are working with respected experts, including their critics, starting by understanding the current ecosystem and what data is available.
James was asked about the National Data Library – proposed in the Labour manifesto – particularly given his thinking about new institutions. He thought there might be a minimalist version – renovating previous work on Registers and single sources of truth, making open data and national statistics more accessible – but hoped for a more ambitious, ‘gamechanging’ version moving towards data trusts, combining less conventional data sources (satellite data, consumer data) and making them safely available to researchers and innovators. He hoped we would get to shape it towards the boldest version. (In the chat, he also wondered whether such an organisation might be more effective at a ‘mission’ level – e.g. on Net Zero – and thought there was real potential around civil society data.)
Jeni noted there is currently a whole bunch of initiatives to make data more accessible – the Integrated Data Service, bodies like HDR UK, the Federated Data Platform – some of which have unclear purposes. One thing that bothered her was that the fundamental purpose of such a system really affects how you design it, including its governance and the implications for public trust – and in some of these proposals the purposes are intermingled. The Onward version and the TBI National Data Trust are there to unlock innovation and support the private sector; the Labour manifesto proposal feels oriented towards helping public services do better. The public is very into the idea of getting public benefit out of public data, and more sceptical about private profit from public data. She worried that several of these initiatives, lacking clarity of purpose and with no governance in place, would lead to patchier data as people opt out – or even choose less engagement with public services for fear of how data collected about them might be used. Labour need clarity on purpose, and to include the public in governance as a matter of course, so the people affected can help work out the details. Building trust will sometimes require blocking access to data. There are already enough barriers to data sharing – if we don’t keep the public in the loop and onside through democratic control, she worried about a backlash that would reverberate widely across sectors.
Caroline was also concerned about how such data would be fed back into decision making – at a strategic, policy level, not just in individual cases. Implicit throughout is data being passed to private companies to innovate. The Bridges court case, on the Public Sector Equality Duty, was clear that a public body retains responsibility and accountability when giving data to private companies. We need a positive focus on how data can be used for good – but also to understand the evidence base on how it impacts people, and to know the risks.
Jeegar picked up Jeni’s concern about public confidence, saying the most direct way to build it is for individuals to know what data is being collected and used (through a digital ID system), giving them control over who sees it and in what context. Jeni argued strongly that we need to move away from thinking individual consent is the only, or most important, way to build trust, towards collective and democratic mechanisms. Jeegar reiterated that an opt-out, rather than an opt-in, to such programmes was the right answer, giving people control.
In the chat, somebody noted that Our Future Health is already doing relevant work, but that it works well because it is voluntary, which is ‘critical to retaining trust’: people have to be able to opt in, and an opt-out system will undermine trust. Someone else noted that, although the UK continues to perform well in international comparisons on digital government and data, scratch beneath the surface and you find ‘stagnation’, and low levels of trust in the UK are a serious problem. Another commenter thought one reason this was an important conversation was that debates on data and AI policy ‘have become too separated’ in the last 18 months, and that we also need to think about the role of security in trustworthy public sector data use. Others wanted more of these conversations – though one commenter pointed out that many of the civil society organisations trying to have them are struggling for funding.
One recommendation for a Labour government
To finish, John asked each panellist for one recommendation for a Labour government:
- Jeegar: a lot of what TBI talks about is underpinned by digital ID – build this from One Login.
- Caroline: adopt the Algorithmic Transparency Recording Standard on a statutory basis and ensure it is complied with.
- James: snuck in two – listen deeply to the practitioners who’ve now been working on this for many years; and, since the tone of discussions like this is (understandably) focused heavily on risk, develop a positive vision to help narrow the huge capability gap between the private sector and public institutions.
- Jeni: agreed on the need for that positive vision, but also called for public engagement in decisions about data and AI.
Quick links
- The video recording of the event, including auto-generated subtitles, is available on our YouTube channel. A copy of the Zoom chat is available.
- Liberty’s guide to non-policing solutions
- Algorithmic Transparency Recording Standard
- A recent Tony Blair Institute/Faculty report on a new model to transform the State
- Robodebt crisis news article
- Horizon scandal reflections
- Information about the People’s Panel on AI
- Thinking about local government digital
- TUC found public sector workers are often excluded
- Connected by Data’s work engaging people in this design of public service
- A recent report from Promising Trouble proposed to make digital services inclusive by design
- A recent Demos report looked at humanising and bringing together government services at a local level
- A recent TBI report on health data
- Overview of data and digital proposals in the Labour manifesto
- Onward version
- Bridges court case
- A positive vision for data and AI policy
- Report finds ‘worrying vacuum’ in surveillance camera plans (Biometrics and Surveillance Camera Commissioner)
- Tech for Good Organisers Network
- Community Tech Network
- Tweet from a GP waiting room (Bob Fischer)
- A snapshot of workers in Wales’ understanding and experience of AI (Wales TUC supported by Connected by Data and Dr Juan Grigera, King’s College London)
- LocalGov Drupal
- The Public Voices in AI Fund (ESRC Digital Good Network)
- Community Connectivity (Promising Trouble)
- LifeCycle Jersey
- Let’s get real about Britain’s AI status (Onward)
- Smart Data Research UK
- Our Health Data Stories (Connected by Data)
- Public attitudes to data and AI: Tracker survey (Wave 3) (DSIT)
- The Stoke Model
- Introducing Data for Action (Data for Action)
- Open Data Camp 9: Manchester, 6-7 July 2024
- Dear next government, there is one way to rebuild trust… and it already exists (Civil Service World)
- AI Governance Talks
- The Data and AI Civil Society Network also organised a political hustings which you can watch on YouTube.