Data Policy Digest
Hello, and welcome to our fifth Data Policy Digest, bringing you all the latest data and AI policy developments. We’ll be taking a short break while parliamentarians are enjoying their recess (we, like them, will be back in September), but there are a few updates based on things rushed out before the break.
If there’s something we’ve missed, something you’re up to that you’d like us to include next time or you have any thoughts on how useful the Digest is or could be, please get in touch via gavin@connectedbydata.org. We’re on Twitter @ConnectedByData and @DataReform. You can also catch up on Digest #1, Digest #2, Digest #3 and Digest #4.
Data policy developments
Deeply DPDIB
If you thought it would be quiet, with us still waiting for a date for Commons report stage…
‘Changes to the parliamentary timetable have resulted in the Bill not being expected to attain Royal Assent until the Spring of 2024 at the earliest.’ So writes outgoing (as in, leaving post, rather than any comment on his personality) Commissioner for the Retention and Use of Biometric Material and Surveillance Camera Commissioner, Fraser Sampson, in his resignation letter.
In further epistolary exploits, parliament’s joint committee on human rights has written to DSIT about its concerns around the Bill, including on the rights of data subjects, automated decision-making, the abolition of the Surveillance Camera and Biometrics Commissioner, and the independence of the ICO. They want a response by 22 September.
The apparently changed timetable and summer lull hasn’t stopped the Guardian and the Observer publishing some pieces looking at the Bill. Connected by Data, the Institute for the Future of Work and the Workers Information Exchange were all quoted in one on protections against ‘robo-firing’ in the gig economy, while ORG feature heavily in another on how the bill ‘favours big business and “shady” tech firms’. (ORG also published a piece on where the Bill fits into the UK’s membership of CPTPP, the Comprehensive and Progressive Agreement for Trans-Pacific Partnership.) The Guardian’s special correspondent has also written that the ‘UK is ill-equipped to protect workers against pitfalls of AI’.
Meanwhile, the European Commission has put some of its concerns about DPDIB in a written parliamentary answer, specifically around the independence of the ICO and the powers given to the Secretary of State around legitimate interests. A guide to the Bill for small businesses from ITPro also touches on adequacy and the interplay between EU and UK regulations.
And if that isn’t enough parliamentary data protection action for you – India’s bill is back. Researcher Divij Joshi has an annotated version and is comparing it with GDPR; lawyer and researcher Smriti Parsheera has a thread; and AccessNow say it’s a ‘bad law’.
Bills, bills, bills
Online Safety Bill The Bill has completed its Lords report stage and third reading (where the Lords get to tidy it up before any amendments head back to the Commons) is scheduled for 6 September. Carnegie UK and Lord Richard Allan have summaries of what happened during report stage – Full Fact are pleased at changes to media literacy provisions, while the Minderoo Centre want greater consideration of children’s safety in the metaverse. Meanwhile, the Public Accounts Committee is accepting evidence on ‘preparedness for online safety regulation’, following a recent NAO report. And regular readers will know we’ve touched on researcher access to data several times – X (formerly, but still to just about everyone, Twitter) are currently threatening legal action against the Centre for Countering Digital Hate for their use of data about the platform.
Digital Markets, Competition and Consumers Bill This will start report stage in the Commons at some point after the summer. The latest edition of UK in a Changing Europe’s ‘UK-EU regulatory divergence tracker’ notes some divergence from the EU on the digital markets front, partly stemming from likely delays to legislation coming into effect – even if the proposals are ‘strikingly similar’ to the EU’s Digital Markets Act.
Investigatory Powers Act The BBC reports that Apple claims it will remove some services from the UK if the government updates the Investigatory Powers Act 2016 – this coming on top of opposition from Apple and others to encryption measures in the Online Safety Bill. (It’s a shame it’s not still the Regulation of Investigatory Powers Act, allowing me some terrible pun like ‘fear the RIPA’.) The DSIT secretary recently defended the OSB measures. The BBC has another piece looking at tech firms’ threats to quit the UK (even if you might quibble with their definition of ‘tech giants’).
In general The Hansard Society and the Institute for Government both looked at the state of the government’s legislative programme as recess started. We now know that the next session of parliament will start on 7 November 2023 – the OSB will fall if it’s not passed by the end of this session.
AI got ‘rithm
There are some details! About the AI Summit! Matt Clifford, chair of the government’s Advanced Research and Invention Agency, and Jonathan Black, former senior civil servant and now Heywood Fellow at the Blavatnik School of Government, will ‘spearhead preparations’ for the Summit as the PM’s representatives. (There’s also £13m for healthcare research, but no surprise rejoining of the EU.) Civil society has been concerned that we’re being excluded from preparations around the UK government’s forthcoming AI Summit – a concern not assuaged by a tweet with Matt Clifford’s thoughts, which talks about giving ‘countries and companies’ a voice. But apparently we will be consulted, according to a written ministerial answer.
Matt Clifford has also spoken to Bloomberg - key points include the Summit aiming to build a shared understanding of AI risks and a platform for how to mitigate them; a requirement for collaboration between countries, industry and ‘other experts’; an acknowledgement that AI opportunities and risks are real and are already being experienced; and ‘it’s too early to speculate’ about invitees for the summit, including China.
It’s also apparently too early to speculate about the date, which we still await. Politico is hearing early November.
In the US, the White House secured voluntary commitments from seven big AI companies, who ‘are launching the Frontier Model Forum, an industry body focused on ensuring safe and responsible development of frontier AI models’. But there are calls that self-regulation isn’t enough. Senators Lindsey Graham (Republican) and Elizabeth Warren (Democrat) have written about the need for big tech regulation, while the FTC is looking into OpenAI. At the UN, there’s a public call for nominations for a new High-level Advisory Body on Artificial Intelligence.
Speaking of OpenAI, The Atlantic did a big piece this month (sparking a lot of interest in its eyeball-scanning Worldcoin initiative). They’ve also shuttered (I can’t work out if I love or hate that Americanism) their tool designed to detect the use of generative AI (it comes as another warning is sounded about AI-enhanced images posing a ‘threat to democratic processes’, and as the BBC looks at Intel’s deepfake detector). Meta tries to assuage our fears by saying AI’s capabilities are some way behind the hype, as it launched its Llama-v2 tool – though its description of it as ‘open source’ has sparked some online discussion (Tim has a take on a different aspect of AI openness). Over at Google, there’s a warning to, er, Google anything its chatbot comes out with, and DeepMind calls for a UK ‘culture shift’ to become a leading AI power. The FT profiled the ‘Transformers: the Google scientists who pioneered an AI revolution’. The pink paper also wrote about why it’s a problem that existing tech giants are the ones riding the AI wave.
In Barbenheimer discourse, the AI focus has been less on the atomic blonde and more on the atomic bomb. Oppenheimer director Christopher Nolan has been talking about the parallels as Jack Stilgoe worries AI companies may draw the wrong lessons, and Palantir’s CEO describes the creation of AI weapons as ‘our Oppenheimer moment’. (Though this Oppenheimer headline will take some beating.)
There’s also been a fair amount of coverage of some often overlooked problems with AI: the environmental, with Bloomberg looking at ‘thirsty data centres’ (as does The Age) and The Guardian looking at AI in general; and the toll on workers, from moderators in Kenya (also covered by the Guardian, with calls for a parliamentary probe, amid other coverage of other algorithmic harms in the country), to a project in India trying to reward the workers behind AI. AI Now brings together climate and labour as it looks at supply chains and workflows.
In the UK, polling by Ipsos found 2 in 3 (64%) Britons agree the government should create new regulations or laws to prevent the potential loss of jobs due to AI; half (50%) think their job will be affected by AI in the next 12 months, increasing to 64% in the next 5 years; and only 1 in 8 (12%) think AI will create far more new job opportunities than the jobs that are lost. The ODI has published its AI regulation white paper response – which I’ve also summarised in a blogpost. In France, algorithm-supported surveillance cameras are being rolled out. At the Vatican, the Pope has spoken about AI and peace. The BBC has taken a look at AI and the death of art. And I’ll admit to not fully understanding this other BBC headline: Warcraft fans trick AI article bot with Glorbo hoax.
DSIT up and take notice
July saw the end of Chloe Smith’s caretaker tenure as Secretary of State – her last day included parliamentary questions, with AI, decarbonisation technology, AI, broadband and more AI the main topics of the day – with Michelle Donelan returning from maternity leave. Since then, Donelan has been busy tweeting videos about the department’s first six months (which DSIT is celebrating by offering organisations the chance to show off their innovations at its new HQ), what the department’s non-exec directors think, and some new Google Fundamentals training on AI, with a particular focus on business.
On the civil service leadership side: permanent secretary Sarah Munby gave a lecture to the Strand Group at King’s College London. A new director general for digital technologies and telecoms was also announced: Emran Mian’s responsibilities will include playing a ‘central role’ in preparing for the AI Summit, which ‘will agree targeted, rapid, international action to develop the international guardrails needed for the safe and responsible development of AI’. Of course, we still await the date.
DSIT and its related organisations have also published a report on how location data can unlock innovation, announced a new fund for science and tech research, and held the first meeting of the Semiconductor Advisory Panel. There were also a few things from ‘take out the trash day’, when departments end up publishing a lot of stuff as the Commons rises, including a consultation on domain names and the beta version of the government’s digital identity trust and attributes framework. You can also see how major government data and digital projects are progressing in the latest Infrastructure and Projects Authority annual report.
Meanwhile over at the Cabinet Office, which has responsibilities for government’s own use of data and AI… minister for the Cabinet Office, Jeremy Quin, spoke about the need for Whitehall to bring in AI and data experts (from business, naturally) to ‘turbocharge the technological skills of civil servants’; the chief technology officer gave an interview to Public Technology; and you still have the best part of a fortnight to apply if you’d like to be government’s new chief data officer.
Deputy Prime Minister Oliver Dowden - former digital secretary and former minister for the Cabinet Office, responsible for government use of data - has told The Times that AI could bring a ‘total revolution’ faster than the Industrial Revolution. A non-paywalled summary suggests he thinks that AI not actually making decisions means it should be fine for government to use - though he did warn of hacking risks.
And outside government, Tory thinktank Onward made some recommendations for reforming Whitehall around science and technology (exempting DSIT from traditional Treasury spend controls and giving the secretary of state power over other departments’ R&D spending plans among them) and Chatham House looks at the rise of scientists into leadership positions in China.
Parly-vous data?
As well as DSIT questions in the Commons, there was a debate on advanced artificial intelligence in the Lords. There’s a lot of AI interest in the Lords at the moment, with a call for evidence on large language models and an ongoing inquiry into AI in weapons systems, evidence submitted to the latter prompting some very cheery coverage in The i.
Back in the Commons, the no-longer Digital, but still Culture, Media and Sport select committee called for the government to make tackling ‘tech abuse’ a priority, warning that the use of smart technology and connected devices in facilitating domestic abuse is becoming a growing problem in its new report, Connected tech: smart or sinister?. There’s also an open select committee inquiry into the future of transport data (and – government rather than parliament – into generative AI and education at DfE).
Labour movement
Rumours of a reshuffle continue, but have still not come to anything.
In the meantime, shadow digital secretary Lucy Powell has done an interview with the Telegraph, mocking Rishi Sunak as a ‘tech bro’ who won’t stand up to vested interests. Her latest newsletter also notes the importance of ‘harness[ing] the potential of the next technological wave for the benefit of all, not just a privileged few’ and the need ‘to ensure that ordinary working people are not left behind, as they have been before, and that decisions made in this domain consider the interests and well-being of everyone’, and the importance of championing women in tech as Labour Women in Tech officially launched. Powell’s shadow cabinet colleague, shadow home secretary Yvette Cooper, also pledged that Labour would make training AI to spread terrorism a criminal offence.
And elsewhere in Labour land, there’s a lot of interest in regulation, with Progressive Britain wondering what a Labour approach could look like and Unchecked UK looking at public attitudes (‘there are votes to be won for whichever political party is brave enough to stand up for strong, well-enforced rules’).
In brief
- Once more unto the breach, dear friends… or rather, thrice more: with the Police Service of Northern Ireland (twice), adopted children in Scotland and the Electoral Commission all at the centre of stories on data breaches, a cyberattack in the case of the Electoral Commission. The Belfast Telegraph reckons the PSNI release slipped through five key checks
- The head of the Secret Intelligence Service, Sir Richard Moore, gave a speech on artificial intelligence and the continued importance of the human factor
- The Department for Levelling Up, Housing and Communities has published more details about the Office for Local Government (OfLog), designed to use data to support good performance by councils. The IfG thinks there are risks stemming from its competing priorities; New Local thinks the idea that some benchmarking of data, devoid of the social and historical context, will make any difference is ‘delusional’.
- Politico have taken a look at the revolving door between politics and big tech
- The National Institute for Health and Care Excellence has produced guidance on using AI to support radiotherapy
- Babylon Health – they were the future once
- Responsible AI UK have started launching some funding calls
- The Met are to expand their use of ‘precision policing’ – their new vision document makes some further mentions of AI – as the Home Office ‘secretly backs facial recognition technology to curb shoplifting’ and ‘UK spy agencies want to relax “burdensome” laws on AI data use’
- The government will apparently be convening a roundtable about the impact of AI on journalism
- Is the computer always right? The government has no plans to change the current legal presumption that it is, despite the Post Office Horizon scandal
- Smart Data Research UK – the UKRI-funded initiative formerly known as Digital Footprints – is seeking views on its strategy
- And… the ICO has reprimanded NHS Lanarkshire for sharing patient data via WhatsApp.
What we’ve been up to
- We’ve had enough of blue-light glowing humanoids when it comes to data and AI. So we’ve made a video that puts people front and centre on an issue that is overly dominated by sci-fi visions and corporate interests
- We’re doing some work with the Wales TUC, investigating how AI is understood by and impacting Welsh workers and trade unionists (also on the TUC site)
- Jeni was in New York for a meeting of the Council for a Fair Data Future
- Maria was testing a card game designed to help conversations around collective and participatory data governance
- We’re planning a design lab around open government commitments on transparency, participation and redress, to coincide with the Open Government Summit in Tallinn
- We’re continuing to support civil society around the Data Protection and Digital Information Bill, AI regulation and much more – and starting to think about what the autumn might look like. Get in touch if you’d like to know more.
What everyone else has been up to
- It’s been a busy few weeks at the Ada Lovelace Institute, with their bumper new report on regulating AI in the UK, accompanied by analysis from AWO (and political pickup, with handy coverage in Civil Service World and PoliticsHome), plus an explainer on foundation models
- The Tony Blair Institute’s Future of Britain conference heard a lot about how tech and AI can transform public services – and they published a report on it, too
- The TUC featured in a BBC News article on workers needing more protection from AI. And the Guardian covers a payout from Uber
- mySociety have been publishing various viewpoints as part of their Repowering Democracy series
- Natalie Byrom wrote for the FT on how AI could exacerbate inequality of access to judgments – remember, the Ministry of Justice has an open consultation on open justice
- Video of Demos’ event on regulating AI is now up, as is a piece in the Guardian about how AI could help reboot democracy
- BCS coordinated an open letter – it wouldn’t be a Data Policy Digest without one – calling for AI to be recognised as a force for good, rather than an existential threat
- The Turing have launched an online learning platform
- Jess Morley published some tips on keeping up with policy developments (we’d also recommend reading the Connected by Data Data Policy Digest, obviously)
- Nesta and Newspeak House are launching a Civic AI Observatory
- And the Royal Society is looking for a senior policy adviser on data (applications close at the end of August)
Events
- As mentioned above, we’ll be in Tallinn around the Open Government Partnership Global Summit in the first week of September
- AI fest, CogX, is happening the following week
- Which is when there’s also the next IfG Data Bites (13 September, details to follow), and Public Service Data Live (14 September)
- And party conference season will soon be upon us. If you’re heading to Liverpool for Labour, keep 1730-1900 on Monday 9 October free – more details to follow. IfG have a couple of events on AI at both Conservative and Labour conferences, and the New Statesman are also looking at harnessing tech for growth
Good reads
- These Women Tried to Warn Us About AI, says Rolling Stone
- The Economist thinks your employer is probably unprepared for AI
- Digital government legend Audrey Tang is one of the contributors to Plurality: Technology for Collaborative Diversity and Democracy
- Vox talk tech with Black Mirror’s Charlie Brooker
- Our friends at Which? pointed us towards a report from the Norwegian Consumer Council, ‘Ghost in the machine: addressing the consumer harms of generative AI’
- What if Generative AI turned out to be a Dud? - Gary Marcus
- Want to stop harmful tech? Just say no – Ethan Zuckerman for Prospect
- Why is technology not making us more productive?, asks BBC News
- The National Preparedness Commission have an explainer on ‘Safety of Artificial Intelligence: Prelude, Principles and Pragmatics’
- AI offers huge promise on breast cancer screening, reports BBC News
- “If It Sounds Like Sci-Fi, It Probably Is”, Emily Bender tells getAbstract
- The Orwell Foundation’s youth fellows have been excavating the archives of the dystopian state of Digitalis
- Mainly a listen, but also a read – what happened when politics and technology collided in Allende’s Chile? Check out Evgeny Morozov’s Santiago Boys. And BBC Sounds has a new podcast on what big tech knows about us
And finally…