Data Policy Digest
Hello, and welcome to our eighth Data Policy Digest, bringing you all the latest data and AI policy developments.
If there’s something we’ve missed, something you’re up to that you’d like us to include next time or you have any thoughts on how useful the Digest is or could be, please get in touch via gavin@connectedbydata.org. We’re on Twitter @ConnectedByData and @DataReform. You can also catch up on Digest #1, Digest #2, Digest #3, Digest #4, Digest #5, Digest #6 and Digest #7.
To receive the next edition of the Data Policy Digest direct to your inbox sign up here.
Data policy developments
Deeply DPDIB
Next up for DPDIB: Commons report stage (when MPs discuss amendments suggested at committee stage), then the Lords. Expect it some time after the state opening on 7 November (we hear rumours it could be at the end of November). A DCMS official suggested the Bill ‘will not be adopted until around middle of 2024’.
One proposed amendment is on processing of data by the police - the Police Federation has more on the campaign behind that - while the outgoing Biometrics and Surveillance Camera Commissioner has expressed concern about a gap in monitoring the use of police powers to retain biometric data.
A statutory instrument we flagged last time out (which changes references in law from the Charter of Fundamental Rights of the European Union to the European Convention on Human Rights) has been recommended by a parliamentary committee for debate and approval by parliament (known as the affirmative procedure). The committee’s recommendations aren’t binding, but the government has accepted them every time so far.
And… DSIT has published the Data Protection and Journalism Code of Practice 2023… The Register looked at what to expect when the UK-US Data Bridge comes into force this week… and the government published an evaluation of the International Data Transfer Agreement.
Bills, bills, bills
Online Safety Bill: The Bill is now law, prompting reactions from organisations including Glitch, Carnegie UK, Which?, 5Rights, and ORG, and stories from outlets including the BBC and Wired. The Secretary of State also discussed antisemitism and violence with social media companies.
Digital Markets, Competition and Consumers Bill: Like DPDIB, it’s Commons report stage next. The Digital Competition Expert Panel - who made the original recommendation for the Bill - are unhappy about proposed changes, saying they would upset the balance between the interests of big tech platforms and their users. The chair of the Lords digital committee is similarly unimpressed. As are newspapers including the Mail (their angle includes tech giants bidding ‘to avoid paying media outlets to use their news’, who could possibly have predicted etc), which also carries criticism of big tech from former digital secretary, Nadine Dorries. Meanwhile the CMA launched a market investigation into cloud services.
AI got ‘rithm
ONLY FIVE SLEEPS UNTIL (FRONTIER) AI POLICY CHRISTMAS!
Most of us are on Santa Sunak’s naughty list, with only a select few making it to Bletchley Park for the AI Safety Summit on 1-2 November. The programme has been published for those who will be there, alongside a shiny Summit website. Day 1 will be led by DSIT Secretary, Michelle Donelan, with discussions on ‘Understanding Frontier AI Risks’, ‘Improving Frontier AI Safety’, and ‘AI for good – AI for the next generation’. Your reminder that the UK government defines ‘frontier AI’, the focus of the Summit, as ‘highly capable general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models’. There’s a blogpost from the University of Nottingham on the origins of the term.
Day 2 will see the PM ‘convene a small group of governments, companies and experts to further the discussion on what steps can be taken to address the risks in emerging AI technology and ensure it is used as a force for good’, as Donelan agrees next steps with international counterparts.
The four ‘road to the summit’ events have all happened, at the Turing, the British Academy, techUK and the Royal Society. Michelle Donelan answered questions on LinkedIn, while one of the PM’s representatives to the Summit, Matt Clifford, answered questions on Twitter (and blogged on LinkedIn, as well as podcasting).
The Fringe
For those not heading to Bletchley, there are lots of events at the AI Fringe at the British Library (Monday to Friday) and elsewhere. On Tuesday, there’s the AI and Society Forum, where Jeni and I will be running a session on ‘Writing an alternative agenda for the AI Safety Summit’. The Citizens have a ‘People’s AI Summit’ going on.
Connected by Data is also running a People’s Panel on AI, where twelve randomly selected, representative members of the public will attend, observe and discuss key events at the AI Fringe and produce a public report giving their verdict on AI and their recommendations to government, industry, civil society and academia for further action. You can hear their initial verdict at an event on Friday, and sign up for updates.
What will come out of the Summit?
There’s been a lot of speculation about who is - and isn’t - going to be at Bletchley. I had a freedom of information request delayed, while Byline Times have criticised the lack of transparency. We know of some world leaders who will and won’t be going, but we’ll just have to wait until Wednesday.
And there’s been lots of speculation about what might be agreed. Politico reported on the 3rd that the UK might create a new AI Safety Institute (perhaps evolving from the Frontier Model Taskforce)… The Guardian reported on the 10th that the focus wasn’t on a single new institution… on a Politico podcast on the 17th, Michelle Donelan dampened rumours that the Summit would lead to a new global regulator but did not deny the Frontier Taskforce could develop into an AI Safety Institute… the FT reported on the 19th that a global advisory group on AI (distinct from a planned UK AI safety institute) might come out of the Summit (and Politico that China would be invited to be part of a global research body)… and then on the 26th Politico parsed leaked communiqués and detected a ‘blow’ to hopes of creating a global research body.
Later that morning, a speech from the PM laid out that the UK would establish ‘the world’s first AI Safety Institute’ and also proposed ‘a truly global expert panel, nominated by the countries and organisations attending, to publish a State of AI Science report’, taking inspiration from the Intergovernmental Panel on Climate Change. There were also some funding commitments and recommitments on top of £37m announced previously. Sunak’s speech followed one earlier in the week from Donelan, and the publication of a discussion paper on the capabilities and risks of frontier AI. The Government has since published a report on safety policies from AI companies and an international survey of public opinion on AI safety (Ada have also just published a review on what the public thinks about AI).
What are the prospects for success? Politico thinks the odds - and ‘shambolic organisation’ - may be against the PM but he could yet pull off a diplomatic coup (the FT also has an overview). One of those with a golden ticket to Bletchley - Connor Leahy, chief exec of AI safety research company Conjecture - has said the Summit risks achieving very little, with powerful tech companies capturing the agenda. He’s one of the people behind a new ‘Control AI’ campaign (which commissioned some polling this week). The Telegraph (twice) worries about the same. And ‘Britain’s Big AI Summit Is a Doom-Obsessed Mess’ - tell us what you really think, eh Wired?
We’ll be holding a call online at 2pm on Tuesday 7 November to reflect on all the Summit-related activity - drop us a line if you’d like to come along.
Summit to use as a media hook
Naturally, many organisations have published reports or previews ahead of the Summit, including… BCS (the PM should put ethics at the top of the Summit agenda)… the RSS (‘statistics and data science are at the heart of the AI movement’)… the Minderoo Centre’s Gina Neff (who will be keeping a summit diary)… RUSI (six expert views)… Global Counsel (on regulating generative AI)… the Royal Academy of Engineering (expert views)… the Adam Smith Institute (on the tipping point towards superintelligence)… CPS (on the risks and opportunities of regulation)… Onward, TBI and the Startup Coalition (with a startup roadmap)… IPPR (on AI for public value creation and three policy pillars)… medConfidential’s Sam Smith (‘it is the summit it is’)… King’s College London (expert views)… Chatham House (generating momentum for effective governance)… and the Bennett Institute, Minderoo Centre and ai@cam at Cambridge (on generative AI).
There’s also been the Godfathers part whatever-we’re-on-now, with Meta’s Yann LeCun saying AI will never threaten humans, while Geoffrey Hinton and Yoshua Bengio are among the co-authors of a paper criticising some AI development as ‘utterly reckless’. Elsewhere, Mustafa Suleyman and Eric Schmidt called for an AI equivalent of the IPCC (Ada’s Andrew Strait is less convinced), Demis Hassabis said AI risk must be treated as seriously as the climate crisis, and the Guardian looked at the divides between (exclusively male) AI pioneers. They also ran a useful piece on ‘a day in the life of AI’.
And on a lighter note… DSIT used AI to generate 1990s yearbook-style photos of its leadership team, and like some predictive supercomputer I already know which two are your favourites.
Let’s talk about summit else
Rattling through everything else… The chief secretary to the Treasury held a roundtable with ‘AI experts’ on driving public sector productivity (some of them visible in a tweet)… The House has an interview with AI minister, Viscount Camrose (his favourite film is 2001: A Space Odyssey and he found ChatGPT ‘so much better than people’ at summarising the Online Safety Bill)… one tech boss urged the UK to ‘use Brexit freedom to become global AI force’… and a UK tribunal agreed with Clearview AI that the ICO had no jurisdiction, while the ICO issued a preliminary enforcement notice against Snap’s genAI chatbot…
The UN’s AI advisory body has its first meeting today … Brookings looks at the Global South’s stake in AI governance dialogues… Freedom House found that in at least 16 countries, ‘this new tech was used to sow doubt, smear opponents, & influence the public debate’…
Deepfakes played a part in the Slovakian election… the EU is ‘in touching distance’ of the world’s first laws regulating artificial intelligence, as Roadmap looks at what it might mean… though the OECD warns ‘vague concepts’ will not protect citizens…
The US may be moving away from an AI Bill of Rights … as an executive order on AI is expected next week… with a possible focus on procurement … as Semafor looks at think tanks and Politico at advisers influencing policy in Washington…
AI Now wonders if the Food and Drug Administration provides a model for AI regulation… TIME looks at the analogies with nuclear energy regulation … The FT thinks we need a political Alan Turing to design AI safeguards … but do we?… the FT also notes that workers could be the ones to regulate AI … CIGI says Humanity Must Establish Its Rules of Engagement with AI — and Soon … and they think the public is missing from national AI strategies … while Inioluwa Deborah Raji welcomes ‘the grounded complexity brought by unions & civil society’ in The Atlantic…
A new ‘data poisoning’ tool ‘lets artists fight back against generative AI’… while Politico ponders ‘the end of photographic truth’ and Google Pixel’s face-altering photo tool sparks AI manipulation debate … The Internet Watch Foundation says ‘worst nightmares’ come true as predators are able to make thousands of new AI images of real child victims… CJR have a look at how newsrooms are using AI … the BBC has published guidelines on generative AI …
Anthropic worked with the Collective Intelligence Project ‘to curate an AI constitution based on the opinions of around 1000 Americans’ (constitutions are also mentioned in an FT article, ‘Broken “guardrails” for AI systems lead to push for new safety measures’)… ChatGPT parent OpenAI seeks $86bn valuation…
DeepMind have ‘developed a framework to evaluate its risks at the point of technological capability, human interaction & systemic impact’… the Partnership on AI have published Guidance for Safe Foundation Model Deployment … Microsoft’s Copilot is coming … New York Magazine wonders what we know about OpenAI’s Sam Altman, ‘the Oppenheimer of Our Age’ … Apple and jobs (not Steve)… Vox says ‘the founders of Anthropic quit OpenAI to make a safe AI company. It’s easier said than done’ …
An interview with Kate Crawford digs into artificial intelligence’s true costs… as there’s a warning that the AI industry could use as much energy as the Netherlands…
And after all that, the LSE says ‘Artificial Intelligence’ is a misnomer anyway.
DSIT up and take notice
Continuing with AI… an independent report praised the UK’s AI Standards Hub … the Frontier AI Taskforce has brought in ‘leading technical organisations’ to help research risks … there’s a new innovation challenge around tackling bias in AI systems (the dedicated website describes CDEI as a directorate of DSIT, for those studying the finer details of CDEI’s institutional form - we know you’re out there)… and (I think I missed this last time) CDEI have published some research on public perceptions towards the use of foundation models in the public sector (the public are open to their use, but want human accountability).
In non-AI news (such a thing exists?), DSIT has published an outcomes monitoring framework for the Plan for Digital Regulation. The Geospatial Commission has a new interim director, and a new report highlighting the power of location data in the safe deployment of connected and self-driving road vehicles.
The policing minister generated controversy at party conference when he suggested the UK’s passport database could be used to help catch criminals - other politicians and campaign groups reacted by calling for a ban on facial recognition (and in the interview linked above, AI minister Viscount Camrose thinks there are some use cases facial recognition shouldn’t be anywhere near).
Over at the Cabinet Office… deputy prime minister Oliver Dowden told the Future Investment Initiative (aka ‘Davos in the desert’) that the next big global shock could be a ‘tech shock’ that could make what we’ve seen so far look like ‘relative skirmishes’… minister for the Cabinet Office Jeremy Quin gave a speech about digital transformation as government said it would recruit 2,500 ‘ambitious tech talents’ to digital roles by June 2025… and another Jeremy, Chancellor Hunt, ‘suggested that AI could be used by teachers to mark papers, by police officers to prevent crimes and by doctors and nurses to diagnose illnesses’.
Conference calls
It’s now been a few weeks since the end of party conference season; my body has sufficiently adjusted to processing less alcohol, more normal hours and having access to salad again.
If you weren’t able to make it to Manchester/Liverpool/Bournemouth/Brighton or elsewhere, it’s worth checking our full listing of data and AI events to see which ones have been recorded (the IfG panel discussions I chaired on AI at the Conservative and Labour conferences were, for example).
At Conservative conference, Rishi Sunak’s speech touched on the importance of innovation but will be best remembered for HS2 rather than anything on data and AI; Michelle Donelan’s speech ran through what DSIT has done since its creation, but it was the section on depoliticising ‘woke’ science which garnered the headlines. I also caught Cabinet Office minister, Alex Burghart, talking about some pilots of generative AI inside government.
At Green conference, we held a great event with former leader, Natalie Bennett, and Andy Stirling, professor of science and tech policy at the University of Sussex. You can catch up on our live tweets while you await a full write-up. The conference passed a motion on AI, saying good governance rather than prevention should be the aim.
At Labour conference, we held a discussion with the Fabian Society and a brilliant panel, including shadow AI minister Matt Rodda MP, to launch our ‘progressive vision’, which has contributions from across civil society. Both the event and the pamphlet are full of ideas for what a Labour government should think about and do around data and AI. Again, catch up on the live tweets - a full write-up will follow.
Elsewhere at Labour conference… a motion on AI and technology in the workplace from the Unite and CWU unions passed… new shadow DSIT secretary Peter Kyle trailed and delivered his first speech in post… Keir Starmer touched on technology for the economy, and for health, in his speech… an apparent deepfake of Starmer was widely condemned, with one shadow minister worried about how resources like Hansard could make future fakes easier and a suggestion of legislation to tackle the problem… policy announcements included money for health tech and more certain R&D funding… and in general, Politico thought pro-innovation frontbench messages might cause problems with the unions.
Parly-vous data?
What’s the collective noun for a group of regulators? Because one of those - consisting of Ofcom, the ICO, the CMA, the FCA and the Digital Regulation Cooperation Forum - was up in front of the Science, Innovation and Technology select committee yesterday discussing AI governance. Ofcom’s Melanie Dawes is also one of those appearing before the Public Accounts Committee talking online safety regulation today.
POST, the Parliamentary Office of Science and Technology, is looking for contributions to its new note on ‘use of artificial intelligence in education delivery and assessment’ until the end of November.
Over in the Lords, the Communications and Digital Committee has been taking evidence on large language models and was unimpressed by the government response to its report on digital exclusion, while in the Chamber there was a discussion of AI and the importance of public engagement - and metaphorical sandwiches.
Labour movement
Most of these are covered in our ‘Conference calls’ section, above, including our new pamphlet (and longer doc) on what a progressive vision for data and AI policy could look like.
Elsewhere, LabourList summarised the policy outputs from the National Policy Forum… the Telegraph doesn’t think mooted legislation from any incoming Labour government will focus on data/digital… and ‘cutting red tape to speed up the adoption of new artificial intelligence, which can rapidly read scans and interpret X-rays’ is part of the plan for reforming the health service.
In brief
- Lots on health data, with Chris Whitty writing about how NHS data could improve healthcare … NHS England investing in public engagement (which we have some thoughts about) and writing a bit about the Federated Data Platform … the health secretary turning to AI to save the NHS… the lead for NHS Login outlining plans…
- The Central Digital and Data Office published an update to its 2022-25 roadmap for digital transformation… and is seeking a senior duo of experts to ‘help government use AI safely and effectively’… while entries for the Civil Service Data Challenge are open (until 1 November)… as ‘AI chatbots do work of civil servants in productivity trial’… and the National Statistician spoke to the head of GDS about how data can provide ‘answers we could only have dreamed about 20 years ago’
- The Guardian looked at the use of algorithms in government in the UK, as Kensington and Chelsea Council admitted using AI-led surveillance software… while the Institute on Governance (Canada) looked at the considered use of AI in government globally…
- “Computer says guilty” - David Allen Green provides an introduction to the evidential presumption that computers are operating correctly, one of the many aspects of the Post Office Horizon scandal
- Once more unto data breaches, with 23andMe User Data Stolen in Targeted Attack on Ashkenazi Jews, a privacy issue with New York’s transit authority potentially exposing ‘Months Worth of Trip Histories to Stalkers’, and the ICO warning that ‘BCC emails remain public sector data-protection blind spot’
- Sticking with the ICO… they called for bosses to respect staff’s right to privacy if they wish to keep tabs on them at work as one in five UK adults said they thought they’d been monitored by an employer… launched a consultation on draft Data Protection Fining Guidance … and found the former boss of NatWest bank breached Nigel Farage’s privacy
- Data vendor lock-in is one of the stories in the NAO’s report on Homes for Ukraine
- And the CEO of Europe’s largest tech conference resigned over Israel-Hamas comments.
What we’ve been up to
- We’ve been planning lots of Connected Conversations, with one already done on collective data rights (do we need them and what should they look like?), and others to come on open government and data and AI; the potential, and practicalities, of norm entrepreneurship for better data governance; and deliberative governance
- Helena wrote about the language of data literacy, while Maria’s touched on language and law, and Emily’s included our recent team retreat
- Jeni spoke at the Global Privacy Assembly in Bermuda, while I gave evidence about government data use to the Covid Inquiry here in London
- Did we mention our progressive vision for data and AI policy?
What everyone else has been up to
- The Ada Lovelace Institute published an evidence review on the use of foundation models in the public sector (with a shorter policy briefing alongside)
- The Civic AI Observatory published their first newsletter which includes, among other things, some links to organisational policies on AI (I’ve also spotted policies from Demos and the BBC)
- Rootcause have studied various frames in the public discourse around AI and concluded that ‘progressive organisations working on AI and digital rights need to move the public conversation about new technologies towards the values we want to underpin digital societies and away from a technical focus on risk mitigation and regulation’
- The Worker Info Exchange have had further success in court, with the District Court of Amsterdam ruling that Uber failed to comply with an order to provide transparency into the automated decision to dismiss two drivers from the UK and Portugal. The case was brought by Worker Info Exchange (WIE) in support of the App Drivers and Couriers Union (ADCU).
- Understanding Patient Data have updated the content on their website, including a guide to how patient data is used. The King’s Fund, meanwhile, asked if the NHS can manage without AI
- Which? asked: Are AI chatbots risking a new wave of convincing scams?
- The latest edition of Chatham House’s The World Today focuses on AI, while the ODI have launched a new blog on Medium for their research
- Demos are looking for a new head of CASM
- And women miss out on AI venture capital investment, according to new analysis from the Turing.
Events
- AI Fringe AI Fringe AI Fringe AI Fringe AI Fringe AI Fringe AI Fringe AI Fringe
- Open Data Manchester are holding workshops to refresh the Declaration on Responsible and Intelligent Data Practice
- The 47th Data Bites is taking place at IfG on 8 November (and you can catch up on number 46)
- We sometimes quote a particular line from Jurassic Park within Connected by Data - the Barbican have a screening of the film and an intro from Sandra Wachter answering, ‘When should scientists stop and think about the ethical implications of their work?’
- Reform have a few tech-relevant things coming up
Good reads
- “Things are not normal”, concluded a BBC Research & Development foresight report. I also happened to spot this from the BBC’s Bill Thompson, from earlier this year: ‘When ChatGPT engages with you it’s basically taking a drunkard’s walk through the forest of word frequencies, calling out the names of each tree as it leans on it before staggering onward’ (there’s a toy sketch of that drunkard’s walk just after this list)
- More on health data, from how to make Britain’s health service AI-ready (The Economist), to data donation and privacy fears (The Guardian), to smart watches not being so smart as heart monitors (New Statesman), to AI therapists (New Statesman)
- It’s crazy how much Transport for London can learn about us from our mobile data (James O’Malley)
- Testing AI or Not: How Well Does an AI Image Detector Do Its Job? (Bellingcat)
- Keep your algorithms away from our love lives (amen, New Statesman)
- The Source of Self-Regard: Toni Morrison on Wisdom in the Age of Information (The Marginalian)
- If alien life is artificially intelligent, it may be stranger than we can imagine (BBC)
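Speaking of that drunkard’s walk: Thompson’s metaphor maps loosely onto the simplest kind of text generator, a Markov chain that picks each next word at random, weighted by how often it followed the previous word in some corpus. Purely as a toy illustration - real LLMs like ChatGPT predict tokens with neural networks, not raw frequency tables - here’s a minimal sketch in Python (the tiny corpus is invented for the example):

```python
import random
from collections import defaultdict

# A toy corpus standing in for the "forest of word frequencies".
corpus = ("the cat sat on the mat and the dog sat on the rug "
          "and the cat saw the dog").split()

# Record which words follow each word; duplicates in the list mean
# random.choice below is naturally weighted by observed frequency.
following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

# The "drunkard's walk": lean on a word, call out a random next one,
# stagger onward.
word = "the"
output = [word]
for _ in range(10):
    choices = following.get(word)
    if not choices:  # reached a word with no recorded successor
        break
    word = random.choice(choices)
    output.append(word)

print(" ".join(output))  # e.g. "the cat sat on the rug and the dog sat on"
```

Every run staggers down a different path through the word-frequency ‘forest’, which is roughly the intuition the quote is reaching for - though it undersells how much more context a large language model takes into account at each step.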
And finally: