Data Policy Digest

Gavin Freeguard

Hello, and welcome to our ninth Data Policy Digest, bringing you all the latest data and AI policy developments. We have a wrap-up of the AI Summit - and fully endorse this sentiment - but there’s also a lot about to happen, Bills-wise.

If there’s something we’ve missed, something you’re up to that you’d like us to include next time or you have any thoughts on how useful the Digest is or could be, please get in touch. We’re on Twitter @ConnectedByData and @DataReform. You can also catch up on Digest #1, Digest #2, Digest #3, Digest #4, Digest #5, Digest #6, Digest #7 and Digest #8.

To receive the next edition of the Data Policy Digest direct to your inbox sign up here.

Data policy developments

Deeply DPDIB

The King’s Speech (the first since 2010 … sorry, since 1951) outlined the legislative agenda for the remainder of this parliament (drama aside, this is likely to be the last session before the general election, which must happen by the end of January 2025).

The Data Protection and Digital Information (No 2) Bill was one of those ‘carried over’ from the previous session, but it’s now known simply as the Data Protection and Digital Information Bill (not to be confused with the previous abandoned Bill of that name). And…

WE HAVE A DATE: it’ll return to the Commons for report stage on Wednesday 29 November.

The background notes to the Speech say the quiet part out loud: DPDIB ‘has been co-designed with industry, for industry, in order to maximise the economic benefits’, with little thought for the impact on the public. No wonder our friends at ORG are dubbing it the ‘Data Grab Bill’: ‘your data will be used against you and you’ll have less ability to do anything about it’. We’ll continue to follow the Bill - check out our resources for more.

In addition… Chris Pounder has written about how DPDI No 2 Bill undermines transparency of Artificial Intelligence development and training … and a statutory instrument making some changes to data protection legislation (that we wrote about last time) will need to be agreed by parliament - let David Erdos be your guide.

Bills, bills, bills

Online Safety Act Ofcom has published several draft codes of practice … there’s a new Online Safety Act Network led by Maeve Walsh and Professor Lorna Woods… the government published a ‘raft of voices’ in support of the Act.

Digital Markets, Competition and Consumers Bill It’s Commons report stage on 20 November before it heads to the Lords… and after weeks of speculation, the government is amending it - Politico’s take is ‘Rishi Sunak performs delicate balancing act in Big Tech lobbying battle’, caving in to some but not all of big tech’s asks.

Other Another Bill - the Investigatory Powers (Amendment) Bill - will have consequences for data … there was criticism of the lack of AI regulation in the King’s Speech from the Science, Innovation and Technology select committee and others … and campaigners also think it missed the mark on cyber reform … DSIT published a summary of the science and tech announcements (with some unfortunate typos), which also includes a legal framework on machine learning and (oddly not mentioned in that link) an Automated Vehicles Bill.

AI got ‘rithm

Apparently there was some international AI Safety Summit the other week.

Connected by Data worked with ORG and the TUC to coordinate a huge open letter of more than 100 signatories calling for the voices of communities, civil society and workers to be included. Covered initially by the FT, it was great to see the letter - and its arguments, made by many across civil society and other sectors - across lots of Summit media coverage.

Our other big initiative was the People’s Panel on AI. This brought together 11 representative members of the public to attend, observe and discuss key events at the AI Fringe, and they ended the week by coming up with seven recommendations.

We’ve got more links on the Summit than you could shake a Large Language Model at in a special section (AI-nnex?) at the end of this newsletter. That covers various scene-setting pieces, all the events we could find on film or transcript, official documents, and reactions, further coverage and much more besides.

So we’ll just concentrate on the key announcements up here.



  • The Bletchley Declaration: this agreement between the nations attending the Summit began by recognising many of the existing uses and harms of AI before focusing on the frontier - the agenda would focus on ‘identifying AI safety risks of shared concern’ and ‘building respective risk-based policies across our countries to ensure safety in light of such risks’, committing to ‘support an internationally inclusive network of scientific research on frontier AI safety that encompasses and complements existing and new multilateral, plurilateral and bilateral collaboration’ (see also: preview, press release, No 10 summary)

  • Technology Secretary announces investment boost making British AI supercomputing 30 times more powerful: new computers in Bristol (Isambard-AI) and Cambridge (Dawn) will constitute the government’s AI Research Resource

  • UK unites with global partners to accelerate development using AI: ‘with Canada, the Bill and Melinda Gates Foundation, the USA and partners in Africa, the UK is helping to fund an £80 million ($100 million) boost in AI programming to combat inequality and boost prosperity on the continent’.



Now let us never speak of it again.

(Until South Korea’s virtual summit in six months’ time. And the French one in a year. And AI all day every day until the end of time. Maybe the killer robots would be welcome after all.)

NON-SUMMIT AI The LSE noted children weren’t mentioned at the summit, but has published AI and children’s rights: a guide to the transnational guidance … though the education secretary tweeted about how DfE is using AI ‘to reform education for the better’… DfE updated its generative AI guidance to include privacy and intellectual property concerns… ‘Most of our friends use AI in schoolwork’ (BBC Young Reporters)… and not everyone is impressed with £2m going to Oak National Academy on AI…

‘We need protection’: How AI, algorithms and oppressive digital tech are pushing workers to the brink (Big Issue, featuring Jeni)… Here’s what we know about generative AI’s impact on white-collar work (FT)… Hollywood actors’ union Sag-Aftra agrees tentative deal to end four-month strike (BBC News)… Andreessen Horowitz is warning that billions of dollars in AI investments could be worth a lot less if companies developing the technology are forced to pay for the copyrighted data that makes it work (Insider)… RSF and 16 partners unveil Paris Charter on AI and Journalism (Reporters Without Borders)… Like horses laid off by the car: BT tech chief’s AI job losses analogy draws anger (The Guardian), original comments made to Raconteur

AI fake nudes are booming. It’s ruining real teens’ lives (Washington Post)… AI: Fears hundreds of children globally used in naked images (BBC News)… Adobe is selling fake AI images of the war in Israel-Gaza (Crikey - which is the name of the publication as well as the right reaction)… The UN Hired an AI Company to Untangle the Israeli-Palestinian Crisis (Wired)…

Euractiv has seen a series of obligations for foundation models and General Purpose AI drafted by the Spanish presidency of the EU Council of Ministers…

Anthropic, Google, Microsoft and OpenAI announce Executive Director of the Frontier Model Forum and over $10 million for a new AI Safety Fund… OpenAI’s six-member board will decide ‘when we’ve attained AGI’ (Venture Beat)… Musk says his new AI chatbot has ‘a little humour’ (BBC News)… As someone who has worked for years in both “open” and “closed” AI companies, operationalising ethical AI, I’m dismayed by battle lines being drawn between “open” and “closed” (Margaret Mitchell)… Pre-print on how AI ‘safety’ became mainstream, ‘Building the epistemic community of AI safety’

Why I am glad artificial intelligence “hallucinates” (Prospect)… AI is at an inflection point, Fei-Fei Li says (MIT Technology Review)… AI outperforms conventional weather forecasting methods for first time (FT)… How we’re building a community of AI enthusiasts in Defra (Defra Digital)…

And… AI named word of the year by Collins Dictionary (BBC News)… Cambridge opted for ‘hallucinate’ … while Viz has its own take on how to tame AI.

DSIT up and take notice

In a previous life, this was me trying to chart all the ministerial changes as a government reshuffle unfolded. I am therefore very grateful to be able to focus on only the data/tech changes… At DSIT: George Freeman stepped down as science minister, replaced by Andrew Griffith (MP since 2019, ex-Sky, former head of Boris Johnson’s policy unit); while Paul Scully was sacked as tech and digital economy minister (El Bow, if you’re wondering), replaced by Saqib Bhatti (MP since 2019, accountant by background, former vice chair of the Tory Party). (Not forgetting that a few weeks ago, the department’s parliamentary private secretary - a junior aide - was sacked over his stance on the Israel-Hamas conflict.)

Elsewhere, John Glen became the 13th Minister for the Cabinet Office (responsible for data in government) in 13 years - just as a review of civil service governance and accountability from one of his predecessors, Francis Maude (2010-15, a whole FIVE YEARS), was published. It includes recommendations on the better use of data, and that the Government Digital Service and Central Digital and Data Office be ‘unified as a single team’. It came shortly after the most senior civil servant in the department and COO for the civil service, Alex Chisholm, announced he’d be stepping down next year.

The new Home Secretary, James Cleverly, had taken an interest in AI while at the Foreign Office - including discussing the dangers.

As for what DSIT has actually been up to… secretary of state Michelle Donelan is in the US, where engagements included a speech to the Family Online Safety Institute and a trip to the National Institute of Standards and Technology … Politico have done a profile of her … while she’s one of many ministers with disappearing WhatsApp messages

George Freeman hosted a reception for the Secretary General of ASEAN, Dr Kao Kim Hourn, celebrating ASEAN-UK cooperation on science … ‘Philanthropic partnership unlocks £32 million for the future of best-in-class UK Biobank’, though the Observer reported they shared data with insurance companies (UK Biobank has called the report ‘highly misleading’)… there was a quantum showcase and agreements with the Netherlands and Australia … there’s a big job going at DSIT, as Director of Data Policy, which you have until close of play this Sunday to apply for… the department published July minutes from the Geospatial Commission as it announced some webinars on the National Underground Asset Register … published research on perceptions of digital subjects and careers … and the Government Office for Technology Transfer has funded the National Physical Laboratory’s development of an innovative new thermal imaging technology for foot ulcers. (You really do get everything with this newsletter.)

Deputy PM Oliver Dowden also gave an interview to the Times where he mentioned AI could reduce ministers’ workload, and Viscount Camrose appeared at the FT’s AI summit (another one!).

Parly-vous data?

There were DSIT questions in the Commons yesterday, while some of those down for DCMS today ask about AI. There’s also a written ministerial statement coming on ‘Online Safety Act - Super-Complaints Consultation’. As mentioned above, there was also a Commons statement and debate on the AI Safety Summit. The Commons Science, Innovation and Technology committee’s inquiry into AI governance continued, with Matt Clifford and senior DSIT official Emran Mian giving evidence - and the government’s response to the committee’s interim report is overdue. PACAC also took evidence on the use of, well, evidence last week.

In the Lords, one of the King’s Speech debates focused on science and tech, introduced by Viscount Camrose (Tim Clement-Jones on DPDIB is particularly worth a read - Labour’s Maggie Jones didn’t mention it). There was also a short debate on Artificial Intelligence: Regulation, and the Lords Communications and Digital Committee’s inquiry into LLMs continues, publishing written evidence and taking oral evidence, including this week from Meta and Microsoft. And there’ll be a question on EdTech in the chamber on 23 November.

Labour movement

On top of Peter Kyle’s contributions to the debate around the AI Summit (see section below, as if you could have missed it)… there were stories about Labour vowing to force firms developing powerful AI to meet requirements and mulling a ‘robot tax’ to penalise firms replacing staff with AI, as shadow Lords minister Steve Bassam told the House that any Labour government would ‘act swiftly’ on AI regulation.

Labour also announced a ‘Regulatory Innovation Office’ to hold regulators accountable for delays on decisions about new products and services, noting the Medicines and Healthcare products Regulatory Agency had a backlog of 966 clinical trial applications earlier this year which could lead to patients missing out on new life-saving medical treatments.

And might Labour free the Postcode Address File? (Background here.)

In brief

What we’ve been up to

In addition to our Summit activity - open letter, People’s Panel, AI and Society Forum workshop and a whole load of other media appearances and events - we ran a Connected Conversation on ‘Robust norms: creating and enforcing new data governance defaults at scale?’ and wrote up a previous one on Community negotiation on data rights.

We’ve got a Connected Conversation on Powerful Actions on Open Government, Data & AI (building on our workshop at the Open Government Partnership Summit) coming up, as well as a Design Lab on Resources for Effective & Inclusive Public Deliberation on Data & AI Governance.

What everyone else has been up to


Good reads

And finally: Excel recruitment time bomb makes top trainee doctors ‘unappointable’ (The Register)


This should keep you going from AI Christmas until actual Christmas… everything from the AI Safety Summit, Fringe, fringes of Fringe, that’s fit to print (and no doubt more).


Pieces before the Summit tended to focus on excluded voices, the frontier focus and geopolitical machinations…

AI policymaking must include business leaders (FT)… Rishi Sunak’s AI safety summit appears slick – but look closer and alarm bells start ringing (The Guardian)… Who’s in charge? Western capitals scramble to lead on AI (Politico)… Civil servants wanted to ban Israel from AI talks (The Times)… Key Players Remain Unconvinced About The Government’s AI Safety Summit (PoliticsHome)… Existential risk? Regulatory capture? AI for one and all? A look at what’s going on with AI in the UK (TechCrunch)… How the UK’s emphasis on apocalyptic AI risk helps business (The Guardian)… Balancing China’s role in the UK’s AI agenda (Chatham House)… Nick Clegg compares AI clamour to ‘moral panic’ in 80s over video games (The Guardian)… MICHELLE DONELAN: Yes AI comes with significant risks – but it would be madness not to embrace its opportunities to make Britain better (The Sun)… How we‘re harnessing AI safely to improve people’s lives (PM on Twitter - more)… AI Minister Warns Regulation Must Move Faster Than Online Safety Bill (Camrose on PoliticsHome podcast)… Michelle Donelan, Peter Kyle and Palantir’s Alex Karp appeared on Sunday with Laura Kuenssberg (newslines from Palantir, some unimpressed with the questioning, as was - a week later - a former minister with the Sunak/Musk interview)…


Let’s start with video, transcripts and official documents from the various events.


A Michelle Donelan speech kicked off the AI Fringe at the British Library (what a time to suffer a cyberattack)… followed by a scene-setting discussion on AI and Society … discussions on navigating the AI hype cycle … preventing AI from perpetuating privilege and widening gaps … uncomfortable truths and trade-offs on inclusivity … fostering responsible AI research … keynotes from Dame Ottoline Leyser (responsible research) and Harriet Harman MP (designing for the margins)… The Citizens hosted The People’s AI Summit … and another Donelan speech at a Guildhall dinner to bring the day to a close…

TUESDAY We ran a workshop on an alternative agenda for Bletchley at the AI and Society Forum, which Defend Digital Me also ran an event at… the Existential Risk Observatory ran some pre-Summit talks (after another event a few weeks before)… at the main Fringe, there were discussions on defining AI safety … standards for responsible AI … evaluating AI … ‘why we need this conversation’ … keynotes on verifying content with Adobe’s Andy Parsons and defining AI safety with Ada’s Francine Bennett … fireside chats on evaluating AI and ‘why we need this conversation’ (with Nick Clegg)… and the Government published a summary of the four ‘road to the Summit’ roundtables with the Royal Society, the British Academy, techUK, The Alan Turing Institute and other events with the Founders Forum, British Standards Institute and others (24 in total)…

WEDNESDAY The Safety Summit itself got underway, with the opening plenary starting with Michelle Donelan and finishing with King Charles… the PM did a podcast with Politico … IFOW held ‘Making the Future Work’, an AI Fringe Summit on the Future of Work… the MKAI Global AI Ethics and Safety - People’s Summit took place… back at the British Library there were discussions on AI and the creative industries … meaningful public involvement in AI governance … biotech AI … fireside chats on responsibly deploying AI for creatives and consumers and AI and climate … keynotes on DNA history from Ginkgo Bioworks’ Anna Marie Wagner and on consumers and competition from the CMA’s Marcus Bokkerink … the Summit closing plenary, with contributions from stage and audience, brought the day to a close… the Government published a summary of the day’s discussions … the PM appeared on Peston…


Most of Thursday’s Bletchley action was behind closed doors, but the PM did a press conference and the UK published an overall summary, a summary of roundtables, a summary of the ‘State of the Science’ report and a summary of the discussion on safety testing … AI Fringe discussions on the future of work … bridging the AI skills gap … how to engineer responsible AI … elections, misinformation and democracy … fairness, bias and law … from the Digital Regulation Cooperation Forum … fireside chats on the future of work and open source in AI … a keynote on democratic institutions and processes from UNESCO’s Gabriela Ramos … and the day ended with Rishi Sunak interviewing Elon Musk and overshadowing everything the Summit somehow managed to achieve…


Panels wrapping up the AI Fringe with Matt Clifford, Dame Angela McLean and Peter Kyle (Kyle also speaking to the Telegraph: ‘AI would have saved my mother from lung cancer, says Labour MP’)… officials from the UK and US and Lawyers Hub’s Linda Bonyo … and Summit attendees Francine Bennett (Ada) and Lila Ibrahim (Google DeepMind) … and our People’s Panel gave their verdict on the week.


Ian Hogarth (Frontier AI Taskforce) and Matt Clifford (PM’s Summit representative) tweeted threads (Clifford was also profiled by Politico and did a thing for No 10)…

A joint statement from the civil society organisations at the Summit … from the UK’s national academies … AI Now published some remarks their executive director, Amba Kak, made at the Summit… the Blavatnik School’s Ciaran Martin wrote about ‘Optimists, doomers and securo-pragmatists: Reflections on the UK’s AI safety summit’… from Marietje Schaake … Conversation on AI should not just be driven by the technology but a vision of the society we want to live in, writes the British Academy’s Hetan Shah… a thread from Kanjun Qiu of Imbue AI…

Michelle Donelan made a statement in parliament, which prompted a short debate … the chair of the Science, Innovation and Technology committee called for near-term AI governance challenges to be addressed with the same urgency and unity as ‘frontier’ risks…

The FT welcomed the move towards the mundane: ‘In AI, focus on technocrats not terminators’… Sam Coates (Sky News) on the Sunak/Musk ‘interview’ … AI Safety Summit Lauded As “Success” But MPs Question What’s Next (PoliticsHome)… The Hypocrisy of the AI Summit: The UK is Doing Nothing About Political Misinformation (David Puttnam for Byline Times)… The AI Debate Is Happening in a Cocoon (The Atlantic)… U.K.’s AI Safety Summit Ends With Limited, but Meaningful, Progress (TIME)… Rishi Sunak has reason to consider his AI summit a success – though voters aren’t likely to notice (IfG)… Sunak the influencer: How the UK’s AI summit surprised the skeptics (Politico)… The week when AI and geopolitics collided - a nuanced summary from Digital Bridge (Politico)… AI is not the problem, prime minister – the corporations that control it are (The Observer)… AI as Co-Pilot? Why work and workers deserve better (IFOW)… World Powers Say They Want to Contain AI. They’re Also Racing to Advance It (Wired)… ‘It’s not clear we can control it’: what they said at the Bletchley Park AI summit (The Guardian)… UK AI summit is a ‘photo opportunity’ not an open debate, critics say (New Scientist)… What Can the UK Teach the World About AI Safety? (Byline Times)… US and China join global leaders to lay out need for AI rulemaking (Politico)… Rishi Sunak’s AI plan has no teeth – and once again, big tech is ready to exploit that (The Guardian)…

Jeni threaded thoughts on marking homework, climate and safety, and who was excluded - and an overall assessment of the Summit and how the AI Safety Institute might play out … she also appeared on the Evening Standard podcast … We’ve had dangerous AI with us for decades, and never had a summit for racist algorithms (Susannah Copson, BBW)… Joint Statement on AI Safety and Openness (Mozilla)… another (via the Daily Mail)… UC Berkeley Center for Human-Compatible AI published a statement from attendees of their ‘International Dialogue on AI Safety’


AI-pocalypse? No. (applause, Labour Together)… “Move slow and fix things”: Britons concerned by current pace of AI deployment (Luminate)… Public Supports AI Safety Summit But Thinks Big Tech Has Too Much Influence (PoliticsHome)… 83% of Brits demand companies prove AI systems are safe before release (AI Safety Communications Centre)… while Ipsos spoke to some experts for their report, Debating Responsible AI: The UK Expert View


The less said about this the better, FCDO… while the FT reported from AI’s Human Safety Summit.

Do you collect, use or share data?

We can help you build trust with your customers, clients or citizens

 Read more

Do you want data to be used in your community’s interests?

We can help you organise to ensure that data benefits your community

 Read more