Data Policy Digest

Gavin Freeguard

Hello, and welcome to our fourteenth Data Policy Digest, bringing you all the latest data and AI policy developments. The Budget, national insurance cuts, tax cuts, and a possible May election are just some of the topics dominating the political fiscal discourse and doubling as an SEO-friendly introduction to this fortnight’s newsletter, a blockbuster on a par with anything at this weekend’s Oscars and with more political intrigue than Super Tuesday or a day in the life of Christian Horner at Red Bull. If that doesn’t game Google, nothing will.

If there’s something we’ve missed, something you’re up to that you’d like us to include next time or you have any thoughts on how useful the Digest is or could be, please get in touch via gavin@connectedbydata.org. We’re on Twitter @ConnectedByData and @DataReform. You can also catch up on previous Digests.

To receive the next edition of the Data Policy Digest direct to your inbox sign up here.

Data policy developments

Deeply DPDIB

As we thought last time, the Bill will start its journey through Lords grand committee on 20 March. Woo, and indeed, hoo.

Lots of amendments - 84 pages’ worth as of 1 March - continue to go down (a reminder that only those agreed unanimously will be added in grand committee - the rest will have to wait for report stage).

Those include several on algorithmic transparency reporting. A parliamentary question gives a little more detail on the government’s recent pledge (in its response to the AI white paper consultation) to make this mandatory - Public Technology has a summary. Government also appears to have a new format for the few records so far published. Our friends over at the Public Law Project have a longer list and a longstanding interest in this area.

Meanwhile, Big Brother Watch led more than 40 organisations (including Connected by Data) in ‘demanding the Government scrap new plans to monitor the bank accounts of millions’ - plans contained in clauses added at the last moment in the Commons, which have generated widespread controversy, including in the Lords. The Guardian has a write-up.

In other news… Sky News has looked at the implications of DPDIB for the use of data in elections, with Voters braced for Facebook and Instagram messages as political ad spend rises… President Biden Issues Executive Order to Protect Americans’ Sensitive Personal Data (White House)… Concerns raised over UK Data Protection Bill’s impact on EU’s GDPR (Euractiv)… James O’Malley has written for PoliticsHome about the National Underground Asset Register, which also features in the Bill (the government has a page on it too)… and we’re interested to see the MoJ announce a ‘New law to make justice more accessible for innocent people wronged by powerful companies’, given many of us would like to see collective rights in DPDIB - that’s actually one area we raise in our brand new briefing on the Bill for Lords committee stage.

We’ve also written up our recent civil society workshop, ‘Datafied’, which highlighted many concerns about the Bill while discussing data and AI policy more widely.

Bills, bills, bills

Online Safety Act The BBC’s Sunday with Laura Kuenssberg continues a focus on online safety, this week interviewing Mariano Janin, father of Mia Janin who took her own life after being bullied… Esther Ghey, mother of Brianna, met Prime Minister Rishi Sunak… Banning phones in schools is just another ploy to distract us, says Eva Wiseman in The Observer… Ofcom’s former head of online safety has decided to step down after controversy over some social media posts… Ofcom has made a positive start but still has a lot of work to do on the online safety regime, says the Public Accounts Committee… while the OSA could pose some challenges to the development of quantum technologies, according to the Regulatory Horizons Council (h/t to the eagle-eyed Politico team).

Digital Markets, Competition and Consumers Bill Report Stage kicks off on 11 March in the Lords.

Other The Media Bill awaits a date for Lords committee stage, though Ofcom has published a roadmap for how it would regulate based on the Bill… it’s Commons committee next for the Automated Vehicles Bill… the Investigatory Powers (Amendment) Bill, which several civil society organisations are concerned about, starts Commons committee stage today, with several amendments down… and in the EU, ‘As of today, big platforms designated as “gatekeepers” must comply with precise obligations — or face heavy sanctions’ under the Digital Markets Act.

AI got ‘rithm

Remember the AI Safety Summit last November? Remember all the other spin-off and fringe events it prompted? Well, the organisers of the Fringe have just published Perspectives from the AI Fringe, said perspectives coming from a range of organisations across civil society, industry and academia that were involved. It also features our People’s Panel. We should be approaching the virtual South Korean follow-on summit soon, with the next in-person event - later this year or perhaps early 2025 - in France.

In other UK AI news… note the spring date on the NAO’s forthcoming report on the use of AI in government… law firm Lewis Silkin have looked at the government’s AI white paper response through a work lens… and Politico spotted that the timeline for an expected CMA update on the AI Foundation Models: initial review has slipped a bit.

More internationally… Should the UN govern global AI? (Brookings)… France means business with Mistral-Microsoft deal (Politico)… the European AI Office is up and running… while in the US, er, This agency is tasked with keeping AI safe. Its offices are crumbling (Washington Post)…

The depressing deepfake paragraph… Instagram owner Meta forms team to stop AI from tricking voters (BBC) … Piers Morgan and Oprah Winfrey ‘deepfaked’ for US influencer’s ads (BBC)… Revealed: the names linked to ClothOff, the deepfake pornography app (The Guardian)… UK’s enemies could use AI deepfakes to try to rig election, says James Cleverly (The Guardian)… Trump supporters target black voters with faked AI images (BBC)… the Center for Countering Digital Hate ‘tested popular AI image tools like Midjourney & found that they generated election disinformation in 41% of tests overall’… New technology to show why images and video are genuine launches on BBC News (BBC)… ‘AI godfather’, others urge more deepfake regulation in open letter (Reuters)… while public affairs agency Charlesbye tested some deepfakes on a focus group… although, How AI and democracy can fix each other (Divya Siddarth - with forthcoming event)…

The depressing environment paragraph… AI boom sparks concern over Big Tech’s water consumption (FT)… Generative AI’s environmental costs are soaring — and mostly secret (Nature)… a new project, Carbolytics, is looking at the carbon cost of tracking cookies… AI Is Taking Water From the Desert (The Atlantic)… though the Center for Data Innovation thinks we need to rethink concerns about AI’s energy use (Center for Data Innovation)…

The depressing creative/media industries/work paragraph… Reach using AI to speed up ‘ripping’ and use same article on multiple sites (Press Gazette)… Your next job interview could be with a bot (Fast Company)… UK publishers tell Parliament: Stop AI using our content without permission (Press Gazette)… What Happens When Your Art Is Used to Train AI (The Markup)… An update on the BBC’s plans for Generative AI (Gen AI) and how we plan to use AI tools responsibly (BBC)… Tumblr’s owner is striking deals with OpenAI and Midjourney for training data, says report (The Verge)… Uber CEO denies pricing algorithm uses ‘behavioural patterns’ (Computer Weekly)… Amazon Turkers Who Train AI Say They’re Locked Out of Their Work and Money (404 Media)… though, ‘My AI twin may get me more modelling work’ (BBC News)…

The OpenAI/Microsoft have had a quiet few weeks paragraph… OpenAI accuses New York Times of paying someone to hack ChatGPT (TheStreet, Michael Veale notes a similar Meta angle)… Microsoft Seeks to Dismiss Parts of Suit Filed by The New York Times, reports the New York Times… SEC Investigating Whether OpenAI Investors Were Misled (WSJ)… Elon Musk sues OpenAI for abandoning original mission for profit (Reuters, though OpenAI have hit back)… ChatGPT goes ‘off the rails’ with gibberish answers (VentureBeat)… Microsoft engineer warns company’s AI tool creates violent, sexual images, ignores copyrights (CNBC)… there’s still action from the Italian data protection authority for them to think about (h/t Valentina)… why we should nationalise OpenAI (Marc Andreessen)…

The as have the other tech giants paragraph… Why Google’s ‘woke’ AI problem won’t be an easy fix (BBC News)… Black Nazis? A woman pope? That’s just the start of Google’s AI problem. (Vox)… Google Trims Jobs in Trust and Safety While Others Work ‘Around the Clock’ (Bloomberg)… Ex-Google engineer charged with stealing AI secrets (BBC News)… AI chip firm Nvidia valued at $2tn (BBC)… Google Is Paying Publishers to Test an Unreleased Gen AI Platform (Adweek)… The Machiavelli of Meta: Inside Nick Clegg’s lobbying operation (UnHerd)…

The oh dear god this is so bad it deserves its own paragraph paragraph, on Gemini and women in tech

The other interesting links which didn’t quite fit anywhere else paragraph… Exclusive: Public trust in AI is sinking across the board (Axios)… Top AI researchers say OpenAI, Meta and more hinder independent evaluations (Washington Post)… Algorithmic Attention Rents research (UCL)… Here Come the AI Worms (Wired)… We Tested an AI Tutor for Kids. It Struggled With Basic Math (Wall Street Journal)… What could you do with 100 new team members? (Adam Kucharski)… The Artificial Human podcast (BBC Sounds)… On AI ‘safety’ (Deb Raji)… Will AI rob us of our humanity? (Prospect)…

And the light relief paragraph… an old clip of Steve Jobs on AI as a Q&A with Aristotle showing a questionable grasp of how history works… and ‘Humanity’s remaining timeline? It looks more like five years than 50’: meet the neo-luddites warning of an AI apocalypse (The Guardian).

Let’s get fis-cal, fis-cal

The government’s polling situation is dire - might a fiscal event… budge it?

There was lots of pre-briefing about the likely data/AI-related announcements, including from government itself… £1.8 billion benefits through public sector productivity drive (HMT) and ‘£360 million to boost British manufacturing and R&D’ (DSIT)… AI and drones in £800m Budget technology package (BBC)… Budget: Funding for AI and DWP service modernisation in public sector productivity drive (CSW)… Britain’s AI sector expected to get £100m extra funding in budget (The Guardian)… that all came around deputy PM Oliver Dowden’s speech on AI for public good (‘AI is potentially - and I don’t say this lightly - a “silver bullet”’), prompting stories like UK government to trial ‘red box’ AI tools to improve ministerial efficiency (FT)… Dowden: Embracing AI ‘only sustainable route’ to cutting civil service headcount (CSW)… and a critique from Emily Bender… also h/t Jen for this I’d not seen before, Automating Public Services: Learning from Cancelled Systems.

Here’s what a quick glance through the budget bumf turned up… in his speech, the Chancellor said he would fund the NHS productivity plan in full, which includes digital transformation ranging from using AI ‘to cut down and potentially cut in half form filling by doctors’, electronic patient records in all hospitals and improving the NHS app (the main document has more, including an acceleration of the Federated Data Platform)… a Centre for Police Productivity will help ‘improve police data quality and enable forces to implement promising technologies’ including facial recognition… support for Open Referral UK and their data standard, for better information about community services… asking local authorities for their plans on utilising data and technology… £100m over five years for the Alan Turing Institute… AI document processing, digitising prison services and jury bundles and introducing an offender risk management tool ‘to provide more robust, data-driven decisions’… outlining how public compute facilities will be accessed, a £7.4 million AI Upskilling Fund pilot for SMEs, and ‘two new data pilots to drive high quality AI in education and improve access to data in adult social care for a total of £3.5 million’… ‘making greater use of cutting-edge technology, such as AI, across the public sector’, including more than doubling the size of i.AI and deploying AI to combat fraud, and a strategic vision for the use of AI in the public sector… and accelerating smart data schemes in energy and transport through targeted funding for consultations and calls for evidence…

Ada were quick out of the blocks with a response: while welcoming further investment, they say ‘there must be more transparency around the proposed pilots, urgent progress on governance, and assurance that the views of frontline professionals and the public will be centred throughout’. Health data expert Jess Morley has a thread on the health bits, while techUK have a useful summary too.

And remember that Budgets tend to unravel in the days afterwards as people have the time to comb through the detail (in the Treasury documentation, OBR analysis and elsewhere), so keep an eye out for further reaction and reflection.

DSIT up and take notice

‘Michelle Donelan pays damages to academic over Hamas claim’ is the BBC headline, the DSIT secretary paying an undisclosed sum to Professor Kate Sang after writing to UKRI about her concerns over views that Sang and Dr Kamna Patel - members of a UKRI advisory group - had expressed. A UKRI investigation has found no evidence of any breach by the academics. Donelan’s statement makes no apology and does not mention that the financial settlement was paid using public money. Bindmans, representing the academics, also released a statement, while Labour’s Peter Kyle called for Donelan to ‘urgently prove’ she has the confidence of the research community and the Lib Dems have called for an inquiry.

Donelan has also deepened collaboration on AI and research with France, signed a memorandum of understanding with Australia on online safety and written for The House - With its start-up mindset, DSIT is delivering rapid change… while science minister Andrew Griffith has been in Saudi Arabia, signing a memorandum of understanding on science and research and giving a speech in Riyadh… Saqib Bhatti gave a speech on the ‘UK Approach to Digital Standards: upholding integrity, accelerating innovation’… and perm sec Sarah Munby gives a lecture this evening on policy making for the age of AI, having become involved in one of the Post Office Horizon scandal storylines

DSIT has also… published findings from a public dialogue on trust in digital identity services… its response to its ‘Powers in relation to UK-related domain name registries’ consultation… some research on ‘Testing the impact of algorithmic rankings on consumer choice’… a call for evidence for the McPartland Review of Cyber Security & Economic Growth… a competition for AI safety researcher exchange with Canada… a guide to DSIT’s telecoms research, development and innovation initiatives… a response to a Regulatory Horizons Council report on quantum… and a secure connected places playbook

The department has also published its Areas of Research Interest, a useful document telling the outside world (particularly academia) which areas it needs evidence on. There’s a collection for other departments, and a tool from UKRI and others making them easier to search.

As for data and digital in government, CDDO has published new spend control guidance (Public Technology summarises that and why it matters), including on how to Meet the requirements of data privacy regulations (though Owen Boswarva is unimpressed)… EXCL: Cabinet Office works on plan to ‘centralise and standardise’ provision of laptops and phones for all civil servants (Public Technology - our Progressive Vision has something on the importance of good tech for public servants, too)… UK minister defends government’s rebranded Counter Disinformation Unit (Global Government Forum)…

In stats-land… there are 20 projects up and running on the Integrated Data Service… the Office for Statistics Regulation’s Ed Humpherson discusses the divergence between internal positivity and external scepticism about analysis in Government and the importance of transparency, off the back of the PACAC evidence base inquiry… and the OSR has published new guidance on sex and gender identity

The ICO have apparently had their Weetabix (other catalysing breakfast cereals are available), finding the Home Office’s GPS migrant-monitoring broke the law… ordering Serco Leisure to stop using facial recognition technology to monitor attendance of leisure centre employees (the Guardian has a story)… issuing a second generative AI call for evidence, on ‘Purpose limitation in the generative AI lifecycle’… publishing Biometric data guidance: Biometric recognition… and the Commissioner calling for senior leaders to take transparency seriously (following action against five public authorities on freedom of information) and speaking at the IAPP conference…

Elsewhere in government… as noted above, the Home Secretary has been talking with tech, Ministers engage with big tech to tackle threat to democracy (Public Technology) and Home Secretary Meeting Big Tech To Collaborate On Election Year Challenges (PoliticsHome)… ‘Major Chinese hack’ on Foreign Office urgently investigated by UK spies (the I)… the NAO has published Digital transformation in government: a guide for senior leaders and audit and risk committees… and in the US, a new Workforce of The Future playbook from the Office of Personnel Management includes AI and data-driven decisions.

Parly-vous data?

Since our last edition, there have been departmental question times for DSIT (21 February, with a fair bit on AI regulation) and the Cabinet Office (29 February, with a bit on digital government services). DCMS questions on 22 February included one on AI and the creative industries. The next DSIT questions take place on Wednesday 17 April, Cabinet Office on Thursday 25 April, and DCMS on Thursday 18 April.

Also in the Commons… the Levelling Up select committee took oral evidence on the data-heavy Office for Local Government (19 February)… the Transport select committee asked how data could improve rail and road infrastructure (21 February)… the Defence select committee had two evidence sessions on developing AI capacity and expertise in UK Defence (20 and 27 February)… the Science, Innovation and Technology committee examined cyber resilience of the UK’s critical national infrastructure (21 February)… there was an adjournment debate on social media access in prisons (26 February)… Justin Madders MP had a Westminster Hall debate on digital exclusion (28 February)… and there was a written ministerial statement, Transforming for a Digital Future: February 2024 progress update (29 February)…

The Lords discussed a question on life science businesses from Baroness (Maggie) Jones (26 February) and a statutory instrument on the ‘immigration exemption’ amending the Data Protection Act (5 March)… the Communications and Digital Committee held a couple of evidence sessions on ‘Future of news: impartiality, trust and technology’, with a few more coming up… and the National Security Strategy (Joint Committee) is still taking evidence on Defending Democracy, with plenty on AI, until 18 March…

Coming up: in the Commons, PACAC takes more evidence on its Transforming the UK’s Evidence Base inquiry (12 March), quizzing Denise Lievesley - currently leading a long-awaited review of the UK Statistics Authority - and the minister, Baroness Neville-Rolfe… the Commons education committee looks at screen time the same day… and there’s another evidence session on defence and AI on 19 March…

In the Lords, Michelle Donelan will be hoping not to libel anyone in front of the Science and Technology committee on 12 March… Lord Blunkett has a question, on making electronic point-of-sale payment devices accessible for those with a visual impairment, such as via tactile keypad, on 19 March… Lord Holmes’ AI Bill gets a second reading (it’s unlikely to go further, as a private member’s bill), on 22 March… and the Lords Communications and Digital Committee are expecting a government response to their report on LLMs by 2 April.

And… ControlAI has a list of every parliamentarian supporting its campaign… while you may have noticed there was a byelection in Rochdale last week - data expert Jon Baines ponders How did George Galloway come to send different canvassing info to different electors?… separately, in Croydon, Labour has apparently reported a data breach.

Labour movement

Shadow DSIT secretary Peter Kyle did an interview with the I, ‘saying Labour will make mandatory the rules on AI development that are currently voluntary’ and claiming ‘ministers were not being transparent on how far AI was already being used in government departments’, and pledging to deal with the disruption AI might cause, including in the workplace…

Shadow education secretary Bridget Phillipson entered the debate about children and smartphones (which she wouldn’t ban)… LabourList have some details about the use of new tech in the Wellingborough byelection campaign… Labour puts lobbyists on the ballot – and big business is the winner (openDemocracy)… former Labour and then tech company adviser Theo Bertram is the new head of the Social Market Foundation thinktank… and there’s a lot of activity looking to shape how Labour might approach government, including the Radical How from Public Digital and Nesta, and a new project on mission-driven government from the Future Governance Forum, UCL’s IIPP and Camden Council…

Perhaps the biggest relevant Labour news over the last fortnight was around citizens’ juries/assemblies - we at Connected by Data, and several other groups, would of course like to see more public participation in data and AI policy and governance. Sue Gray, Keir Starmer’s chief of staff, expressed support for citizens’ juries (also picked up by shadow health secretary, Wes Streeting) - which the Telegraph reports was quickly walked back… but not before some useful mythbusting from Involve, an expert take from Ipsos, and a piece from Nesta’s Centre for Collective Intelligence Design, among others.

On the Horizon

Unregulated AI could cause the next Horizon scandal, Ada’s Fran Bennett wrote in the New Statesman… while the Public Law Project asked, How can government avoid the next Horizon scandal?

In brief

What we’ve been up to

What everyone else has been up to

Events

Good reads

And finally:
