Data Policy Digest

Gavin Freeguard

Hello, and welcome to the third Data Policy Digest from Connected by Data! The Digest aims to bring you all the main developments in data and digital policy, including the Data Protection and Digital Information Bill, the AI White Paper and beyond. (Often way beyond.) It’s been more than six weeks since the last edition (I’ve been away). Good job it’s been a quiet month or so in data policy…

If there’s something we’ve missed, something you’re up to that you’d like us to include next time or you have any thoughts on how useful the Digest is or could be, please get in touch via jonathan@connectedbydata.org. We’re on Twitter @ConnectedByData and @DataReform. You can also catch up on Digest #1 and Digest #2.

Data policy developments

Deeply DPDIB

Committee stage came to an earlier end than expected – on 23 May, rather than the anticipated 13 June – which may have left some organisations scrambling to get their written evidence in. You can read everything that was submitted on the Bill website.

Committee stage had kicked off with a day of oral evidence, which featured Connected by Data’s Jeni Tennison and several other civil society groups, alongside voices from business. The transcripts are on the Bill website.

We await details of when the next stage will be – that’ll be report stage, where all MPs can consider the Bill as amended in committee and propose further amendments.

In the meantime…

Bills, bills, bills

The Online Safety Bill has completed its Lords committee stage – where peers considered various amendments – and will start report stage on 6 July.

There have been calls for better researcher access to platform data: a group of UK academics signed a letter calling for better access, with support for an amendment. (Mozilla has launched its own initiative to try to fill the gap.) Factchecking charity Full Fact and various health organisations called for health misinformation to be considered as part of the Bill. More than 80 organisations have signed a letter concerned about encryption; journalist Heather Brooke recently wrote about the OSB’s role in surveillance.

One successful amendment will allow bereaved families to access their children’s data from social media firms, while the government will also move an amendment on jail terms for sharing explicit images without consent. Ofcom has published updated guidance on how it intends to regulate, while Graham Smith has written about the ‘shifting paradigms in platform regulation’ around the OSB, and Politico looks at how the OSB is influencing tech regulation elsewhere, focusing on Baroness Kidron.

The Digital Markets, Competition and Consumers Bill is now in Commons committee stage. Chief exec of the Competition and Markets Authority, Sarah Cardell, touched on the Bill in a keynote conference speech, rejecting the idea it could ‘undermine innovation and growth’. Meanwhile, the Regulatory Policy Committee, which scrutinises the evidence underpinning new legislation, was not impressed with the Bill’s impact assessment.

The government has also ‘fired the starting gun’ on the new regime for products with internet connectivity, brought into being by the Product Security and Telecommunications Infrastructure Act, which received royal assent in December.

More generally, the Lords have expressed concern at the use of secondary legislation, something which has come up in conversation about several of the digital/data bills.

AI got ‘rithm

There’s been so much AI-related policy news, I should probably have got ChatGPT to help me structure it.

Let’s start in the UK. First, many civil society organisations have been working on their submissions to the consultation on the AI White Paper, which closed last week. Our response is on our website (we also tweeted about it); Big Brother Watch have also published theirs, as have the TUC and the West Midlands Police Data Ethics Committee; and the Ada Lovelace Institute published three tests they would be applying in their analysis (Ada also hosted a roundtable, which Connected by Data’s Adam went along to). Keep an eye out for other published responses over the coming days and weeks.

AI has received prime ministerial attention. There were roundtables organised by Number 10 and DSIT with industry – civil society and public voices were notably absent. Hopefully that will not be the case with the international AI summit the UK will host in the autumn. The Prime Minister’s keynote speech (video) at London Tech Week included the detail that Google DeepMind, OpenAI and Anthropic will give early access to their models ‘for research and safety purposes to help build better evaluations and help us better understand the opportunities and risks of these systems’. (The Centre for the Governance of AI looks at what a foundation model information-sharing regime might look like.) There’s also been chatter about whether the UK could host an international body on AI: the Washington Post makes the case for why the UK might actually be well-placed to write global rules on AI. The UN Secretary General thinks such a body might be a good idea (while we’re on the UN, the World Health Organisation has called for safe and ethical AI in health). The Economist welcomes the PM’s enthusiasm though thinks his plans fall short, in an article worth visiting for the photo alone.

The UK government announcements kept coming… Ian Hogarth will chair the UK’s new AI Foundation Model Taskforce, announced back in April. He’s written about the taskforce for The Times (and the Taskforce is hiring for multiple roles). We’ve been concerned that recent AI initiatives have lacked civil society, community and public voices – so it’s good to see Ian has set up a form if you’d like to get in touch. He also wrote a piece on the need to ‘slow down the race to God-like AI’ back in April.

DSIT announced £54m in university funding to develop cutting-edge AI. Some £31m will go to work on responsible and trustworthy AI at the University of Southampton – Responsible AI UK are on Twitter and have been tweeting about their team(s). Southampton’s Dame Wendy Hall sat down with Sky News’ political editor, Beth Rigby, to talk all things AI.

It wouldn’t be AI policy without a few joint letters. We signed one convened by our friends at the Public Law Project, expressing concern that the government’s current approach does not properly protect people from the adverse effects of automated decision making. PLP put forward an alternative white paper with a greater focus on transparency, picked up by the BBC among others. There was also a joint letter from the Fairness, Accountability, and Transparency (FAccT) community, calling for ‘sound policy based on the years of research that has focused on this topic’.

Politico have also highlighted ‘the 14 people who matter in UK AI policy’. (Civil society and academia are again missing.) Civil Service World have a big feature on the use of ChatGPT in government (and got some officials to play with it), while Computer Weekly explored the risks and opportunities for businesses. The ICO is also reviewing the use of generative AI.

Elsewhere… Former prime minister of New Zealand, Jacinda Ardern, wrote for the Washington Post: ‘There’s a model for governing AI. Here it is.’ Spoiler: it’s a collaborative, multi-stakeholder one. IEEE has looked at the different international approaches.

In the US, the White House has also announced various actions, with President Biden meeting several CEOs. There have been Senate Committee hearings, and the House Committee on Science, Space and Technology has also heard from witnesses on ‘Artificial Intelligence: Advancing Innovation Towards the National Interest’. The UK isn’t alone in feeling it is falling behind: Fast Company looks at Senate Majority Leader Chuck Schumer’s efforts. The Washington Post, meanwhile, went ‘Inside the Senate’s crash course on “AI 101”’. New York Magazine has an interview with FTC chair Lina Khan on how to regulate big tech and AI. (Incidentally, the FTC has started clamping down on dark patterns when it comes to unsubscribing from services – starting with Amazon Prime.)

In Europe, Access Now was pleased with ‘a much-improved’ EU AI Act, though still has concerns, as does EDRi. Computer Weekly has a report. Law firm Burges Salmon have published a flowchart to help people understand the AI Act. Marietje Schaake, adviser to European Commissioner for Competition, Margrethe Vestager, wrote that ‘We need to keep CEOs away from AI regulation’. Politico goes inside an apparent fight between Vestager and internal market commissioner, Thierry Breton, over AI. Stanford University research has looked at how foundation models would comply with the Act. And we shouldn’t forget the Council of Europe’s treaty efforts (more here from an unimpressed anonymous data protection officer).

Some of the big AI companies made news beyond their various high level political meetings. OpenAI’s CEO says the age of large models is already over, according to Wired; while OpenAI dealt with a leaked document suggesting they were lobbying to water down regulation (contrary to their public position); and AI expert Hal Daumé III looked through another OpenAI governance document (spoiler: he was unimpressed). DeepMind is proposing a new framework for thinking about novel risks.

And the trio of AI godfathers (the three winners of the 2018 Turing Award) continue to provide as much drama as any Francis Ford Coppola film. One, Meta’s Yann LeCun, does not think AI will take over the world or destroy jobs. Another, Yoshua Bengio, says he feels ‘lost’ over his life’s work. The third, Geoffrey Hinton, who quit Google a few weeks ago, gave a lecture at Cambridge on ‘Two Paths to Intelligence’ – the New Statesman also interviewed him in a piece called ‘The godfathers of AI are at war’. Though useful for understanding Hinton’s view, that didn’t really get at why his resignation proved so controversial: former Googlers told Fast Company of their disappointment that he stayed silent when they quit while raising the alarm over AI. In general, there is much media coverage of, and alarmism over, existential risks – government adviser Matthew Clifford felt the need to call for calm over how some of his comments had been represented – at the expense of thinking about the harms AI can already cause. Signal president Meredith Whittaker, former UK cybersecurity chief Ciaran Martin, and technologist Rachel Coldicutt are among many calling for a more measured and realistic focus on AI harms.

DSIT up and take notice

Beyond all the AI activity… DSIT published the long-awaited National Semiconductor Strategy. Mentions of ‘world leading’: 8; mentions of ‘world-leading’: 9. We’re world-leading in the number of government strategies around data, digital and tech, if nothing else. DSIT has also published some research on the UK’s safety tech sector.

The Centre for Data Ethics and Innovation published ‘Enabling responsible access to demographic data to make AI systems fairer’. Data policy veterans may recall this was a subject covered by the government’s Data: a new direction consultation back in autumn 2021, though it didn’t feature in DPDIB or the AI White Paper. CDEI also published a portfolio of tools for assuring AI systems.

The Geospatial Commission published the Geospatial Strategy 2030, which has lots on data and harnessing AI. You can also read all the submissions to one of the consultations informing the strategy.

Ministers have been busy. Secretary of State, Chloe Smith, spoke at London Tech Week, the Global Forum for Technology, and the Robotics and Automation Conference. AI minister, Viscount Camrose, is now on Twitter (he’s not currently following any of the civil society organisations interested in data and AI – maybe we should make him a list?). DSIT minister Paul Scully failed to make the shortlist to be Tory candidate for London mayor (PUBLIC technology entrepreneur and former Cameron adviser, Daniel Korski, did – although he has since been hit by controversy). And science minister, George Freeman, is the latest guest on the BBC’s Political Thinking podcast.

DSIT has also been busy internationally, holding a dialogue with Singapore, agreeing cooperation with Canada on quantum technologies, reaching an in-principle agreement with the US on a ‘data bridge’, and agreeing strengthened science and tech ties and a semiconductors partnership with Japan.

Labour movement

Tech Monitor have just published a long piece asking ‘what are Labour’s tech policies, exactly?’ It features our own Jeni, who gets the last word: she’s looking for Labour ‘to really face the fact that data and AI is political. It’s about power – and it’s about whose side you are on as a government’.

This came after Labour leader, Keir Starmer, told London Tech Week that the UK should avoid repeating the mistakes of deindustrialisation as AI advances, and that – while there is a risk of communities being left poorer – there are also opportunities for the public good. He wrote for The Independent along the same lines. Ahead of techUK’s Policy Leadership Conference, shadow digital secretary Lucy Powell spoke to the Guardian about Labour’s emerging tech policy, including licensing and tighter rules for those developing AI. And shadow chancellor, Rachel Reeves, channelled Harold Wilson, touching on the ‘green heat of technology’ in a New Statesman profile.

More generally, the Party is giving more detail on its five missions, most recently in a speech about making Britain a clean energy superpower. And there was a leak of the various policy proposals collated by the National Policy Forum (submitted by party members and affiliates as part of Labour’s internal policy processes), which could give a clue to some of the ideas that might make it into a manifesto.

Parly-vous data?

Labour MP, Mick Whitley, introduced a private member’s bill to the Commons: the Artificial Intelligence (Regulation and Workers’ Rights) Bill. It follows his Westminster Hall debate on AI and the Labour Market back in April. (More on Mick below.)

And in Cardiff Bay… a member of the Senedd used ChatGPT to write a speech on Wales winning the World Cup of Darts. You can decide for yourself whether it hit the bullseye, or was just a load of bull, using TheyWorkForYou which now covers the Welsh Parliament.

In brief

Making a mockery of that subheading, there’s been quite a lot:

What we’ve been up to

It’s been… somewhat busy:

  • On 20 June, we organised an event with the TUC in parliament on ‘the worker experience of the AI revolution’. Chaired by our very own Jeni and the TUC’s Mary Towers, we heard from three workers already dealing with the consequences of tech at work: Garfield Hylton, who works at Amazon, and whose data story is one of those featured in our recent report; Luke Elgar, from the Royal Mail; and Laurence Bouvard, actress and voiceover artist. We had three politicians on the panel: Labour’s Mick Whitley, who is sponsoring a private member’s bill on Artificial Intelligence (Regulation and Workers’ Rights), shadow data minister Steph Peacock and former tech minister Damian Collins. There were a lot of parliamentarians in the audience to hear from the workers, too. You can catch up via our live-tweeting from the event.

  • We’ve been at various conferences: Jeni and Tim were in Costa Rica for RightsCon, where we co-hosted a policy design lab and roundtable on ‘a global policy agenda on collective data governance’ with Aapti Institute, Research ICT Africa and The Datasphere Initiative, and Jeni spoke on a panel about reimagining data rights; and I headed to the MyData conference in lovely Helsinki to talk about collective data futures, alongside others including Demos Helsinki.

  • We convened a ‘Future Data Narratives Design Lab’, aiming ‘to start the co-creation of a strategy for shifting the inaccurate, damaging way data is currently framed & understood in media, policy and industry narratives’.

  • Tim blogged about data values – twice – and LLMs, building on the Global Data Barometer.

  • We’re also thinking about our plans for party conference season in September/October. If you’re going, and looking for speakers, do get in touch.

What everyone else has been up to

Events

Good reads

Plenty on AI, as you might expect:

A few pieces on how data and AI affect, or could affect, us and our governments:

A couple of profiles:

And finally…
