Data Policy Digest

Gavin Freeguard

Hello, and welcome to our fifth Data Policy Digest, bringing you all the latest data and AI policy developments. We’ll be taking a short break while parliamentarians are enjoying their recess (we, like them, will be back in September), but there are a few updates based on things rushed out before the break.

If there’s something we’ve missed, something you’re up to that you’d like us to include next time, or you have any thoughts on how useful the Digest is or could be, please get in touch. We’re on Twitter @ConnectedByData and @DataReform. You can also catch up on Digest #1, Digest #2, Digest #3 and Digest #4.

Data policy developments

Deeply DPDIB

If you thought it would be quiet, with us still waiting for a date for Commons report stage, think again.

‘Changes to the parliamentary timetable have resulted in the Bill not being expected to attain Royal Assent until the Spring of 2024 at the earliest.’ So writes outgoing (as in, leaving post, rather than any comment on his personality) Commissioner for the Retention and Use of Biometric Material and Surveillance Camera Commissioner, Fraser Sampson, in his resignation letter.

In further epistolary exploits, parliament’s joint committee on human rights has written to DSIT about its concerns around the Bill, including on the rights of data subjects, automated decision-making, the abolition of the Surveillance Camera and Biometrics Commissioner, and the independence of the ICO. They want a response by 22 September.

The apparently changed timetable and summer lull hasn’t stopped the Guardian and the Observer publishing some pieces looking at the Bill. Connected by Data, the Institute for the Future of Work and the Workers Information Exchange were all quoted in one on protections against ‘robo-firing’ in the gig economy, while ORG feature heavily in another on how the bill ‘favours big business and “shady” tech firms’. (ORG also published a piece on where the Bill fits into the UK’s membership of CPTPP, the Comprehensive and Progressive Agreement for Trans-Pacific Partnership.) The Guardian’s special correspondent has also written that the ‘UK is ill-equipped to protect workers against pitfalls of AI’.

Meanwhile, the European Commission has put some of its concerns about DPDIB in a written parliamentary answer, specifically around the independence of the ICO and the powers given to the Secretary of State around legitimate interests. A guide to the Bill for small businesses from ITPro also touches on adequacy and the interplay between EU and UK regulations.

And if that isn’t enough parliamentary data protection action for you – India’s bill is back. Researcher Divij Joshi has an annotated version and is comparing it with GDPR; lawyer and researcher Smriti Parsheera has a thread; and AccessNow say it’s a ‘bad law’.

Bills, bills, bills

Online Safety Bill The Bill has completed its Lords report stage, and third reading (where the Lords get to tidy it up before any amendments head back to the Commons) is scheduled for 6 September. Carnegie UK and Lord Richard Allan have summaries of what happened during report stage – Full Fact are pleased at changes to media literacy provisions, while the Minderoo Centre want greater consideration of children’s safety in the metaverse. Meanwhile, the Public Accounts Committee is accepting evidence on ‘preparedness for online safety regulation’, following a recent NAO report. And regular readers will know we’ve touched on researcher access to data several times – X (formerly, but still to just about everyone, Twitter) are currently threatening legal action against the Centre for Countering Digital Hate for their use of data about the platform.

Digital Markets, Competition and Consumers Bill This will start report stage in the Commons at some point after the summer. The latest edition of UK in a Changing Europe’s ‘UK-EU regulatory divergence tracker’ notes some divergence from the EU on the digital markets front, partly stemming from likely delays to legislation coming into effect – even if the proposals are ‘strikingly similar’ to the EU’s Digital Markets Act.

Investigatory Powers Act The BBC reports that Apple claims it will remove some services from the UK if the government updates the Investigatory Powers Act 2016 – this coming on top of opposition from Apple and others to encryption measures in the Online Safety Bill. (It’s a shame it’s not still the Regulation of Investigatory Powers Act, allowing me some terrible pun like ‘fear the RIPA’.) The DSIT secretary recently defended the OSB measures. The BBC has another piece looking at tech firms’ threats to quit the UK (even if you might quibble with their definition of ‘tech giants’).

In general The Hansard Society and the Institute for Government both looked at the state of the government’s legislative programme as recess started. We now know that the next session of parliament will start on 7 November 2023 – the OSB will fall if it’s not passed by the end of this session.

AI got ‘rithm

There are some details! About the AI Summit! Matt Clifford, chair of the government’s Advanced Research and Invention Agency, and Jonathan Black, former senior civil servant and now Heywood Fellow at the Blavatnik School of Government, will ‘spearhead preparations’ for the Summit as the PM’s representatives. (There’s also £13m for healthcare research, but no surprise rejoining of the EU.) Civil society has been concerned that we’re being excluded from preparations around the UK government’s forthcoming AI Summit – a concern not assuaged by a tweet with Matt Clifford’s thoughts, which talks about giving ‘countries and companies’ a voice. But apparently we will be consulted, according to a written ministerial answer.

Matt Clifford has also spoken to Bloomberg - key points include the Summit aiming to build a shared understanding of AI risks and a platform for how to mitigate them; a requirement for collaboration between countries, industry and ‘other experts’; an acknowledgement that AI opportunities and risks are real and are already being experienced; and ‘it’s too early to speculate’ about invitees for the summit, including China.

It’s also apparently too early to speculate about the date, which we still await. Politico is hearing early November.

In the US, the White House secured voluntary commitments from seven big AI companies, who ‘are launching the Frontier Model Forum, an industry body focused on ensuring safe and responsible development of frontier AI models’. But there are calls that self-regulation isn’t enough. Senators Lindsey Graham (Republican) and Elizabeth Warren (Democrat) have written about the need for big tech regulation, while the FTC is looking into OpenAI. At the UN, there’s a public call for nominations for a new High-level Advisory Body on Artificial Intelligence.

Speaking of OpenAI, The Atlantic did a big piece this month (sparking a lot of interest in its eyeball-scanning Worldcoin initiative). They’ve also shuttered (I can’t work out if I love or hate that Americanism) their tool designed to detect the use of generative AI (it comes as another warning is sounded about AI-enhanced images posing a ‘threat to democratic processes’, and as the BBC looks at Intel’s deepfake detector). Meta tries to assuage our fears by saying AI’s capabilities are some way behind the hype, as it launched its Llama-v2 tool – though its description of it as ‘open source’ has sparked some online discussion (Tim has a take on a different aspect of AI openness). Over at Google, there’s a warning to, er, Google anything its chatbot comes out with, and DeepMind calls for a UK ‘culture shift’ to become a leading AI power. The FT profiled the ‘Transformers: the Google scientists who pioneered an AI revolution’. The pink paper also wrote about why it’s a problem that existing tech giants are the ones riding the AI wave.

In Barbenheimer discourse, the AI focus has been less on the atomic blonde and more on the atomic bomb. Oppenheimer director Christopher Nolan has been talking about the parallels as Jack Stilgoe worries AI companies may draw the wrong lessons, and Palantir’s CEO describes the creation of AI weapons as ‘our Oppenheimer moment’. (Though this Oppenheimer headline will take some beating.)

There’s also been a fair amount of coverage of some often overlooked problems with AI: the environmental, with Bloomberg looking at ‘thirsty data centres’ (as does The Age) and The Guardian looking at AI in general; and the toll on workers, from moderators in Kenya (also covered by the Guardian, with calls for a parliamentary probe, amid other coverage of algorithmic harms in the country), to a project in India trying to reward the workers behind AI. AI Now brings together climate and labour as it looks at supply chains and workflows.

In the UK, polling by Ipsos found 2 in 3 (64%) Britons agree the government should create new regulations or laws to prevent the potential loss of jobs due to AI; half (50%) think their job will be affected by AI in the next 12 months, increasing to 64% in the next 5 years; and only 1 in 8 (12%) think AI will create far more new job opportunities than the jobs that are lost. The ODI has published its AI regulation white paper response – which I’ve also summarised in a blogpost. In France, algorithm-supported surveillance cameras are being rolled out. At the Vatican, the Pope has spoken about AI and peace. The BBC has taken a look at AI and the death of art. And I’ll admit to not fully understanding this other BBC headline: Warcraft fans trick AI article bot with Glorbo hoax.

DSIT up and take notice

July saw the end of Chloe Smith’s caretaker tenure as Secretary of State – her last day included parliamentary questions, with AI, decarbonisation technology, AI, broadband and more AI the main topics of the day – with Michelle Donelan returning from maternity leave. Since then, Donelan has been busy tweeting videos about the department’s first six months (which DSIT is celebrating by offering organisations the chance to show off their innovations at its new HQ), what the department’s non-exec directors think, and some new Google Fundamentals training on AI, with a particular focus on business.

On the civil service leadership side: permanent secretary Sarah Munby gave a lecture to the Strand Group at King’s College London. A new director general for digital technologies and telecoms was also announced: Emran Mian’s responsibilities will include playing a ‘central role’ in preparing for the AI Summit, which ‘will agree targeted, rapid, international action to develop the international guardrails needed for the safe and responsible development of AI’. Of course, we still await the date.

DSIT and its related organisations have also published a report on how location data can unlock innovation, announced a new fund for science and tech research, and held the first meeting of the Semiconductor Advisory Panel. There were also a few things from ‘take out the trash day’, when departments end up publishing a lot of stuff as the Commons rises, including a consultation on domain names and the beta version of the government’s digital identity trust and attributes framework. You can also see how major government data and digital projects are progressing in the latest Infrastructure and Projects Authority annual report.

Meanwhile over at the Cabinet Office, which has responsibilities for government’s own use of data and AI… minister for the Cabinet Office, Jeremy Quin, spoke about the need for Whitehall to bring in AI and data experts (from business, naturally) to ‘turbocharge the technological skills of civil servants’; the chief technology officer gave an interview to Public Technology; and you still have the best part of a fortnight to apply if you’d like to be government’s new chief data officer.

Deputy Prime Minister Oliver Dowden - former digital secretary and former minister for the Cabinet Office, responsible for government use of data - has told The Times that AI could bring a ‘total revolution’ faster than the Industrial Revolution. A non-paywalled summary suggests he thinks that AI not actually making decisions means it should be fine for government to use - though he did warn of hacking risks.

And outside government, Tory thinktank Onward made some recommendations for reforming Whitehall around science and technology (exempting DSIT from traditional Treasury spend controls and giving the secretary of state power over other departments’ R&D spending plans among them) and Chatham House looks at the rise of scientists into leadership positions in China.

Parly-vous data?

As well as DSIT questions in the Commons, there was a debate on advanced artificial intelligence in the Lords. There’s a lot of AI interest in the Lords at the moment, with a call for evidence on large language models and an ongoing inquiry into AI in weapons systems; evidence submitted to the latter has prompted some very cheery coverage in The i.

Back in the Commons, the no-longer Digital, but still Culture, Media and Sport select committee called for the government to make tackling ‘tech abuse’ a priority, warning in its new report, Connected tech: smart or sinister?, that the use of smart technology and connected devices to facilitate domestic abuse is a growing problem. There’s also an open select committee inquiry into the future of transport data (and – government rather than parliament – into generative AI and education at DfE).

Labour movement

Rumours of a reshuffle continue, but have still not come to anything.

In the meantime, shadow digital secretary Lucy Powell has done an interview with the Telegraph, mocking Rishi Sunak as a ‘tech bro’ who won’t stand up to vested interests. Her latest newsletter also notes the importance of ‘harness[ing] the potential of the next technological wave for the benefit of all, not just a privileged few’ and the need ‘to ensure that ordinary working people are not left behind, as they have been before, and that decisions made in this domain consider the interests and well-being of everyone’, and the importance of championing women in tech as Labour Women in Tech officially launched. Powell’s shadow cabinet colleague, shadow home secretary Yvette Cooper, also pledged that Labour would make training AI to spread terrorism a criminal offence.

And elsewhere in Labour land, there’s a lot of interest in regulation, with Progressive Britain wondering what a Labour approach could look like and Unchecked UK looking at public attitudes (‘there are votes to be won for whichever political party is brave enough to stand up for strong, well-enforced rules’).

In brief

What we’ve been up to

What everyone else has been up to


Good reads

And finally…

Do you collect, use or share data?

We can help you build trust with your customers, clients or citizens

Read more

Do you want data to be used in your community’s interests?

We can help you organise to ensure that data benefits your community

Read more