What Labour should(n't) do on data and AI policy in the first 100 days

Adam Cantwell-Corn

AI, data rights, the power of big tech, online safety… digital, data and technology policy is breaking into the political mainstream like never before. Any incoming Labour government will need to make the most of the opportunities afforded by new tech and the better use of data to deliver on its industrial strategy and public services plans – while protecting the public against the risks, hype and vested interests. So what should a Labour administration do – and not do?

On Monday 9 October 2023, from 17:30 at Tate Liverpool, Connected by Data and the Fabian Society brought together a wide spectrum of voices for a vibrant discussion on one of the hot-button issues of our time.

Gavin Freeguard, policy associate at Connected by Data, hosted the conversation featuring:

Live tweets of the event

We live-tweeted the event, and the unrolled thread of those tweets is captured below.

We’re about to kick off our event at Labour Party Conf - follow for live tweets. #LPC23 What should(n’t) Labour do on data and AI? We’re also introducing ‘Towards a progressive vision on data and AI policy’ - which brings together a range of voices to think about what can and should happen in this area, centring the fact that there are political choices to be made.

First up, Matt Rodda MP, shadow minister for AI, emphasising the beneficial applications of AI - including at his local hospital and in healthcare more widely - and its role in economic growth. On the AI White Paper, Matt Rodda MP criticises the Government’s slowness in getting to grips with the issues. Here is our White Paper response.

Connected by Data’s Jeni Tennison says there are three things that need addressing: 1) Broaden out from ‘AI safety’, which is too narrow. Labour needs a wider focus, and the AI Taskforce needs to be scrapped and rethought to include hospitals, workplaces and communities. 2) The Tories have “wrecked public trust on AI” - on Palantir contracting and facial recognition. We need “radical transparency”: impact assessments before and after deployment, and audits with public and civil society participation. The public sector should lead the way. 3) We need to get rid of the false dichotomy of ‘innovation vs regulation/protection’. Instead we need a public purpose approach, so that tech serves social needs, not merely company whims.

Sasjkia Otto from the Fabian Society cites from our ‘progressive vision’: tech should help people help people, and we need to understand how the risks and rewards are shared - including among workers, patients and communities. She says that people aren’t seeing the benefits yet: we need to fill regulatory gaps, make public investment for public good, and tackle the excessive power of a handful of companies that ‘get treated like heads of state’. She continues that the Government’s AI Summit risks entrenching tech monopolies and needs a rethink. ‘Economic growth’ also needs balancing with key concerns like human rights and inequality of power over shaping how tech is used. And the Data Protection and Digital Information Bill needs vital amendments that give workers voice and power at work. Our resources on that are on our website.

Next up is Mary Towers from the TUC’s AI project: “There’s an urgent need for regulation of AI at work”. We’re supporting the TUC on just this. Mary says a multi-disciplinary approach is needed to manage the complexity of near- and far-term risks and the cross-cutting nature of AI. Mary says Labour should: 1) Establish a solid foundational vision now, with red and green lines on AI - e.g. in the workplace, a red line of “only AI that is explainable and transparent” and a green line that centres worker-led AI development. 2) Legislate for AI in the workplace, now. 3) Commit to collectivism and social partnership, including trade unions: “it is the counterbalance to the dominance of tech corporates”. This means repealing anti-trade union laws and rebalancing power in the workplace.

Responding to an audience question on AI-induced ‘work intensification’, Matt says that AI augmentation can assist workers, and that the opportunity needs to be seized while managing harms - referencing Labour’s plans on workers’ rights. Matt invites audience members with detailed thoughts to get in touch.

Tom Adeyoola, tech entrepreneur and panellist on Labour’s Start-up Review, kicks off by cautioning that AI can mean many things. The core question is the implication of mass automation - and what that means for society. He cites our ‘towards a progressive vision’ principles as a good starting point. Tom emphasises the core question of ‘who’ will benefit from AI - and says that government institutions and processes need to be updated to incorporate such principles.

Shameem Ahmad from the Public Law Project says the public sector is using automated decision making on tax, welfare, law enforcement, immigration and child protection, but opaque practices are frustrating efforts to hold government to account. “Without urgent action we will sleepwalk into a world where Gov makes unlawful and harmful automated decisions, efficiently”, she says, citing the Dutch benefits case and the Australian ‘robodebt’ case. The UK government’s use of AI is not transparent - so the Public Law Project has had to fight to reveal its uses here.

If the next government is serious about using tech for good, it must build trust, says Shameem.

Mat Lawrence from Common Wealth says Labour should not falsely trade off ‘participation, safety or progress’. We need to democratise AI - not in thin, market-based ways, but in its governance, use and benefits. So what should Labour do? Mat says the AI story needs to align with the green prosperity plan, and Common Wealth’s argument is that this cannot be left to the whims of the market: the state must lead and direct. Mat suggests three areas: 1) Public compute as a national resource. 2) Tackling the ‘expropriation of the data commons’ by ‘stewarding data’ for the purpose of the commons. 3) Anti-trust policies that challenge concentrated market power. He cites the USA’s FTC.

We’re wrapping up with some brilliant contributions from the audience on working with technologists, the international dimension, and how public health data can be utilised for public value. #LPC23
