Data Policy Digest
Hello, and welcome to our ninth Data Policy Digest, bringing you all the latest data and AI policy developments. We have a wrap-up of the AI Summit - and fully endorse this sentiment - but there’s also a lot about to happen, Bills-wise.
If there’s something we’ve missed, something you’re up to that you’d like us to include next time or you have any thoughts on how useful the Digest is or could be, please get in touch via gavin@connectedbydata.org. We’re on Twitter @ConnectedByData and @DataReform. You can also catch up on Digest #1, Digest #2, Digest #3, Digest #4, Digest #5, Digest #6, Digest #7 and Digest #8.
To receive the next edition of the Data Policy Digest direct to your inbox sign up here.
Data policy developments
Deeply DPDIB
The King’s Speech (the first since 2010 … sorry, since 1951) outlined the legislative agenda for the remainder of this parliament (drama aside, this is likely to be the last session before the general election, which must happen by the end of January 2025).
The Data Protection and Digital Information (No 2) Bill was one of those ‘carried over’ from the previous session, but it’s now known simply as the Data Protection and Digital Information Bill (not to be confused with the previous abandoned Bill of that name). And…
WE HAVE A DATE: it’ll return to the Commons for report stage on Wednesday 29 November.
The background notes to the Speech say the quiet part out loud: DPDIB ‘has been co-designed with industry, for industry, in order to maximise the economic benefits’, with little thought for the impact on the public. No wonder our friends at ORG are dubbing it the ‘Data Grab Bill’: ‘your data will be used against you and you’ll have less ability to do anything about it’. We’ll continue to follow the Bill - check out our resources for more.
In addition… Chris Pounder has written about how DPDI No 2 Bill undermines transparency of Artificial Intelligence development and training … and a statutory instrument making some changes to data protection legislation (that we wrote about last time) will need to be agreed by parliament - let David Erdos be your guide.
Bills, bills, bills
Online Safety Act Ofcom has published several draft codes of practice … there’s a new Online Safety Act Network led by Maeve Walsh and Professor Lorna Woods… the government published a ‘raft of voices’ in support of the Act.
Digital Markets, Competition and Consumers Bill It’s Commons report stage on 20 November before it heads to the Lords… and after weeks of speculation, the government is amending it - Politico’s take is ‘Rishi Sunak performs delicate balancing act in Big Tech lobbying battle’, caving in to some but not all of big tech’s asks.
Other Another Bill - the Investigatory Powers (Amendment) Bill - will have consequences for data … there was criticism of the lack of AI regulation in the King’s Speech from the Science, Innovation and Technology select committee and others … and campaigners also think it missed the mark on cyber reform … DSIT published a summary of the science and tech announcements (with some unfortunate typos), which also includes a legal framework on machine learning and (oddly not mentioned in that link) an Automated Vehicles Bill.
AI got ‘rithm
Apparently there was some international AI Safety Summit the other week.
Connected by Data worked with ORG and the TUC to coordinate a huge open letter of more than 100 signatories calling for the voices of communities, civil society and workers to be included. Covered initially by the FT, it was great to see the letter - and its arguments, made by many across civil society and other sectors - across lots of Summit media coverage.
Our other big initiative was the People’s Panel on AI. This brought together 11 representative members of the public to attend, observe and discuss key events at the AI Fringe, and they ended the week by coming up with seven recommendations.
We’ve got more links on the Summit than you could shake a Large Language Model at in a special section (AI-nnex?) at the end of this newsletter. That covers various scene-setting pieces, all the events we could find on film or transcript, official documents, and reactions, further coverage and much more besides.
So we’ll just concentrate on the key announcements up here.
BEFORE THE SUMMIT:
- New £100 million fund to capitalise on AI’s game-changing potential in life sciences and healthcare (from the PM’s speech on 26 October)
- Britain to be made AI match-fit with £118 million skills package
- Sunak to launch AI chatbot for Britons to pay taxes and access pensions (Telegraph)
- Summit attendees.
WEDNESDAY:
- The Bletchley Declaration: this agreement between the nations attending the Summit began by recognising many of the existing uses and harms of AI before focusing on the frontier - the agenda would focus on ‘identifying AI safety risks of shared concern’ and ‘building respective risk-based policies across our countries to ensure safety in light of such risks’, committing to ‘support an internationally inclusive network of scientific research on frontier AI safety that encompasses and complements existing and new multilateral, plurilateral and bilateral collaboration’ (see also: preview, press release, No 10 summary)
- Technology Secretary announces investment boost making British AI supercomputing 30 times more powerful: new computers in Bristol (Isambard-AI) and Cambridge (Dawn) will constitute the government’s AI Research Resource
- UK unites with global partners to accelerate development using AI: ‘with Canada, the Bill and Melinda Gates Foundation, the USA and partners in Africa, the UK is helping to fund an £80 million ($100 million) boost in AI programming to combat inequality and boost prosperity on the continent’.
THURSDAY:
- World leaders, top AI companies set out plan for safety testing of frontier AI as first global AI Safety Summit concludes: this heralded a plan on AI safety testing, a ‘State of the Science’ report led by Yoshua Bengio, and future AI Safety Summits in six months (Korea, virtual) and a year (France, in person)
- Prime Minister launches new AI Safety Institute: this confirmed the evolution of the Frontier AI Taskforce into the Institute, and agreements with the US AI Safety Institute (announced earlier in the week) and the Government of Singapore. The government published a more extensive overview of the Institute, too, and the current taskforce published its second progress report.
FROM THE US:
- President Biden issued a pretty comprehensive Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (factsheet summary, AI.gov is looking for talent, summary of various AI announcements, welcomed by one of those involved with the AI Bill of Rights)
- Vice President Kamala Harris gave a major speech in London (video, transcript) - Politico’s take was ‘Existential to who?’ US VP Kamala Harris urges focus on near-term AI risks, noting that ‘Harris’ remarks echo similar concerns from civil society groups’
- There’s also a Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy (the US, the UK and others have signed).
Now let us never speak of it again.
(Until South Korea’s virtual summit in six months’ time. And the French one in a year. And AI all day every day until the end of time. Maybe the killer robots would be welcome after all.)
NON-SUMMIT AI The LSE noted children weren’t mentioned at the summit, but have published AI and children’s rights: a guide to the transnational guidance … though the education secretary tweeted about how DfE is using AI ‘to reform education for the better’… DfE updated its generative AI guidance to include privacy and intellectual property concerns… ‘Most of our friends use AI in schoolwork’ (BBC Young Reporters)… and not everyone is impressed with £2m going to Oak National Academy on AI…
‘We need protection’: How AI, algorithms and oppressive digital tech are pushing workers to the brink (Big Issue, featuring Jeni)… Here’s what we know about generative AI’s impact on white-collar work (FT)… Hollywood actors’ union Sag-Aftra agrees tentative deal to end four-month strike (BBC News)… Andreessen Horowitz is warning that billions of dollars in AI investments could be worth a lot less if companies developing the technology are forced to pay for the copyrighted data that makes it work (Insider)… RSF and 16 partners unveil Paris Charter on AI and Journalism (Reporters Without Borders)… Like horses laid off by the car: BT tech chief’s AI job losses analogy draws anger (The Guardian), original comments made to Raconteur …
AI fake nudes are booming. It’s ruining real teens’ lives (Washington Post)… AI: Fears hundreds of children globally used in naked images (BBC News)… Adobe is selling fake AI images of the war in Israel-Gaza (Crikey - which is the name of the publication as well as the right reaction)… The UN Hired an AI Company to Untangle the Israeli-Palestinian Crisis (Wired)…
Euractiv has seen a series of obligations for foundation models and General Purpose AI drafted by the Spanish presidency of the EU Council of Ministers…
Anthropic, Google, Microsoft and OpenAI announce Executive Director of the Frontier Model Forum and over $10 million for a new AI Safety Fund… OpenAI’s six-member board will decide ‘when we’ve attained AGI’ (Venture Beat)… Musk says his new AI chatbot has ‘a little humour’ (BBC News)… As someone who has worked for years in both “open” and “closed” AI companies, operationalising ethical AI, I’m dismayed by battle lines being drawn between “open” and “closed” (Margaret Mitchell)… Pre-print on how AI ‘safety’ became mainstream, ‘Building the epistemic community of AI safety’ …
Why I am glad artificial intelligence “hallucinates” (Prospect)… AI is at an inflection point, Fei-Fei Li says (MIT Technology Review)… AI outperforms conventional weather forecasting methods for first time (FT)… How we’re building a community of AI enthusiasts in Defra (Defra Digital)…
And… AI named word of the year by Collins Dictionary (BBC News)… Cambridge opted for ‘hallucinate’ … while Viz has its own take on how to tame AI.
DSIT up and take notice
In a previous life, this was me trying to chart all the ministerial changes as a government reshuffle unfolded. I am therefore very grateful to be able to focus on only the data/tech changes… At DSIT: George Freeman stepped down as science minister, replaced by Andrew Griffith (MP since 2019, ex-Sky, former head of Boris Johnson’s policy unit); while Paul Scully was sacked as tech and digital economy minister (El Bow, if you’re wondering), replaced by Saqib Bhatti (MP since 2019, accountant by background, former vice chair of the Tory Party). (Not forgetting that a few weeks ago, the department’s parliamentary private secretary - a junior aide - was sacked over his stance on the Israel-Hamas conflict.)
Elsewhere, John Glen became the 13th Minister for the Cabinet Office (responsible for data in government) in 13 years - just as a review of civil service governance and accountability from one of his predecessors, Francis Maude (2010-15, a whole FIVE YEARS), was published. It includes recommendations on the better use of data, and that the Government Digital Service and Central Digital and Data Office be ‘unified as a single team’. It came shortly after the most senior civil servant in the department and COO for the civil service, Alex Chisholm, announced he’d be stepping down next year.
The new Home Secretary, James Cleverly, had taken an interest in AI while at the Foreign Office - including discussing the dangers.
As for what DSIT has actually been up to… secretary of state Michelle Donelan is in the US, where engagements included a speech to the Family Online Safety Institute and a trip to the National Institute of Standards and Technology … Politico have done a profile of her … while she’s one of many ministers with disappearing WhatsApp messages …
George Freeman hosted a reception for the Secretary General of ASEAN, Dr Kao Kim Hourn, celebrating ASEAN-UK cooperation on science … ‘Philanthropic partnership unlocks £32 million for the future of best-in-class UK Biobank’, though the Observer reported they shared data with insurance companies (UK Biobank has called the report ‘highly misleading’)… there was a quantum showcase and agreements with the Netherlands and Australia … there’s a big job going at DSIT, as Director of Data Policy, which you have until close of play this Sunday to apply for… the department published July minutes from the Geospatial Commission as it announced some webinars on the National Underground Asset Register … published research on perceptions of digital subjects and careers … and the Government Office for Technology Transfer has funded the National Physical Laboratory’s development of an innovative new thermal imaging technology for foot ulcers. (You really do get everything with this newsletter.)
Deputy PM, Oliver Dowden, also gave an interview to the Times where he mentioned AI could reduce ministers’ workload, and Viscount Camrose appeared at the FT’s AI summit (another one!).
Parly-vous data?
There were DSIT questions in the Commons yesterday, while some of those down for DCMS today ask about AI. There’s also a written ministerial statement coming on ‘Online Safety Act - Super-Complaints Consultation’. As mentioned above, there was also a Commons statement and debate on the AI Safety Summit. The Commons Science, Innovation and Technology committee’s inquiry into AI governance continued, with Matt Clifford and senior DSIT official Emran Mian giving evidence - and the government’s response to the committee’s interim report is overdue. PACAC also took evidence on the use of, well, evidence last week.
In the Lords, one of the King’s Speech debates focused on science and tech, introduced by Viscount Camrose (Tim Clement-Jones on DPDIB is particularly worth a read - Labour’s Maggie Jones didn’t mention it). There was also a short debate on Artificial Intelligence: Regulation, and the Lords Communications and Digital Committee’s inquiry into LLMs continues, publishing written evidence and taking oral evidence, including this week from Meta and Microsoft. And there’ll be a question on EdTech in the chamber on 23 November.
Labour movement
On top of Peter Kyle’s contributions to the debate around the AI Summit (see section below, as if you could have missed it)… there were stories about Labour vowing to force firms developing powerful AI to meet requirements and mulling a ‘robot tax’ to penalise firms replacing staff with AI, as shadow Lords minister Steve Bassam told the House that any Labour government would ‘act swiftly’ on AI regulation.
Labour also announced a ‘Regulatory Innovation Office’ to hold regulators accountable for delays on decisions about new products and services, noting the Medicines and Healthcare products Regulatory Agency had a backlog of 966 clinical trial applications earlier this year which could lead to patients missing out on new life-saving medical treatments.
And might Labour free the Postcode Address File? (Background here.)
In brief
- HEALTH Palantir to be named as winner of Federated Data Platform (Digital Health News), as Palantir’s Peter Thiel is taking a break from democracy (The Atlantic)… Algorithms are deciding who gets organ transplants. Are their decisions fair? (FT)… The app that promised an NHS ‘revolution’ then went down in flames (The Times on Babylon Health)… and that Observer article about UK Biobank sharing data with insurers, above, rejected by the organisation … while data quality concerns are among those flagged by the NAO in a new report, Reforming adult social care in England
- GOVERNMENT The government has published a roadmap for transforming data and digital in government to 2025 (summary)… the ONS chief data officer tells Public Technology ‘We want a much more connected relationship with citizens around how their data is used’ (summary), as the deputy national statistician talks about linked data … DBT has launched a Smart Data challenge with Challenge Works, the ODI and Smart Data Foundry
- EUROPE A couple of interesting rulings from the Court of Justice of the EU: that ‘a Member State may not subject a communication platform provider established in another Member State to general and abstract obligations’ and on taxing big tech
- ICO Last time out, we noted that the ICO had found former NatWest boss Alison Rose had breached Nigel Farage’s privacy. The ICO has now apologised to Dame Alison, ‘for suggesting that we had made a finding that she breached the UK GDPR in respect of Mr Farage when we had not investigated her.’
- SECURITY The NCSC has warned of enduring and significant threat to UK’s critical infrastructure including AI and elections, while the Irish Council for Civil Liberties has a new report on ‘Europe’s hidden security crisis: How data about European defence personnel and political leaders flows to foreign states and non-state actors’
- POLICE Police urged to double use of facial recognition software - The Guardian… Government creating ‘worrying vacuum’ around surveillance camera safeguards - Tech Monitor (the Commissioner is concerned)… British police testing women for abortion drugs, and requesting data from menstrual tracking apps - Tortoise… Major UK retailers urged to quit ‘authoritarian’ police facial recognition strategy - BBW in The Observer
What we’ve been up to
In addition to our Summit activity - open letter, People’s Panel, AI and Society Forum workshop and a whole load of other media appearances and events - we ran a Connected Conversation on ‘Robust norms: creating and enforcing new data governance defaults at scale?’ and wrote up a previous one on Community negotiation on data rights.
We’ve got a Connected Conversation on Powerful Actions on Open Government, Data & AI (building on our workshop at the Open Government Partnership Summit) coming up, as well as a Design Lab on Resources for Effective & Inclusive Public Deliberation on Data & AI Governance.
What everyone else has been up to
- Carly Kind will be stepping down as director of the Ada Lovelace Institute in February 2024 (you can now apply for the job)
- Demos published a report on ‘Open Sourcing the AI Revolution: Framing the debate on open source, artificial intelligence and regulation’
- And Ellen Judson is stepping down as head of CASM at Demos to become a senior fellow - you can now apply to be CASM director
- Full Fact looked at the misrepresentation of data and impact of data gaps across at least three government departments
- The Bennett Institute (Cambridge) is looking for a Research Associate for its AI & Geopolitics Project
- The APPG on the Future of Work, supported by IFOW, ran an event ‘From the Summit to the Road Ahead’
- I hosted the 47th edition of the IfG’s Data Bites, taking in everything from LLMs to Field of Dreams
- This year’s ODI Summit was themed around ‘Data Changes’, aiming to ‘put data at the heart of the global conversation about AI’
Events
- techUK’s Digital Ethics Summit 2023, subtitled ‘Seizing the moment’, takes place on 6 December (including a session on our People’s Panel)
- On 21 November, there’s a University of Oxford event, ‘Is the existential threat of AI overhyped?’
- On 27 November, the IEA discusses, ‘Tech Turmoil: Does the Digital Markets Bill threaten Britain’s economy?’
- The Minderoo Centre have an event on 30 November, ‘Is ‘artificial’ intelligent? Understanding human intelligence in the AI age’
- If you’re interested in data in government, then sign up for UKGovCamp on 20 January - an inspiring unconference for all things public sector digital
Good reads
- Inside a Six-Month Espionage Campaign at Facebook - an exclusive excerpt from Wall Street Journal reporter Jeff Horwitz’s new book ‘Broken Code’ (Rolling Stone)
- AI edition of the New Yorker, featuring Why the Godfather of A.I. Fears What He’s Built, A Coder Considers the Waning Days of the Craft, Does A.I. Lead Police to Ignore Contradictory Evidence?, What the Doomsayers Get Wrong About Deepfakes, Holly Herndon’s Infinite Art, Sheila Heti on the Fluidity of the A.I. “Self” and Christoph Niemann’s “Create Your Own Cover with Till-E”
- Everyone got duped by Sam Bankman-Fried’s big gamble (BBC News)
- CRUISE KNEW ITS SELF-DRIVING CARS HAD PROBLEMS RECOGNIZING CHILDREN — AND KEPT THEM ON THE STREETS (The Intercept), and G.M.’s Cruise Moved Fast in the Driverless Race. It Got Ugly (New York Times)
And finally: Excel recruitment time bomb makes top trainee doctors ‘unappointable’ (The Register)
AI-NNEX
This should keep you going from AI Christmas until actual Christmas… everything from the AI Safety Summit, Fringe, fringes of Fringe, that’s fit to print (and no doubt more).
SCENE SETTING
Pieces before the Summit tended to focus on excluded voices, the frontier focus and geopolitical machinations…
AI policymaking must include business leaders (FT)… Rishi Sunak’s AI safety summit appears slick – but look closer and alarm bells start ringing (The Guardian)… Who’s in charge? Western capitals scramble to lead on AI (Politico)… Civil servants wanted to ban Israel from AI talks (The Times)… Key Players Remain Unconvinced About The Government’s AI Safety Summit (PoliticsHome)… Existential risk? Regulatory capture? AI for one and all? A look at what’s going on with AI in the UK (TechCrunch)… How the UK’s emphasis on apocalyptic AI risk helps business (The Guardian)… Balancing China’s role in the UK’s AI agenda (Chatham House)… Nick Clegg compares AI clamour to ‘moral panic’ in 80s over video games (The Guardian)… MICHELLE DONELAN: Yes AI comes with significant risks – but it would be madness not to embrace its opportunities to make Britain better (The Sun)… How we‘re harnessing AI safely to improve people’s lives (PM on Twitter - more)… AI Minister Warns Regulation Must Move Faster Than Online Safety Bill (Camrose on PoliticsHome podcast)… Michelle Donelan, Peter Kyle and Palantir’s Alex Karp appeared on Sunday with Laura Kuenssberg (newslines from Palantir, some unimpressed with the questioning, as was - a week later - a former minister with the Sunak/Musk interview)…
#CONTENT
Let’s start with video, transcripts and official documents from the various events.
MONDAY
A Michelle Donelan speech kicked off the AI Fringe at the British Library (what a time to suffer a cyberattack)… followed by a scene setting discussion on AI and Society … discussions on navigating the AI hype cycle … preventing AI from perpetuating privilege and widening gaps … uncomfortable truths and trade-offs on inclusivity … fostering responsible AI research … keynotes from Dame Ottoline Leyser (responsible research) and Harriet Harman MP (designing for the margins)… The Citizens hosted The People’s AI Summit … and another Donelan speech at a Guildhall dinner to bring the day to a close…
TUESDAY We ran a workshop on an alternative agenda for Bletchley at the AI and Society Forum, where Defend Digital Me also ran an event… the Existential Risk Observatory ran some pre-Summit talks (after another event a few weeks before)… at the main Fringe, there were discussions on defining AI safety … standards for responsible AI … evaluating AI … ‘why we need this conversation’ … keynotes on verifying content with Adobe’s Andy Parsons and defining AI safety with Ada’s Francine Bennett … fireside chats on evaluating AI and ‘why we need this conversation’ (with Nick Clegg)… and the Government published a summary of the four ‘road to the Summit’ roundtables with the Royal Society, the British Academy, techUK, The Alan Turing Institute and other events with the Founders Forum, British Standards Institute and others (24 in total)…
WEDNESDAY The Safety Summit itself got underway, with the opening plenary starting with Michelle Donelan and finishing with King Charles… the PM did a podcast with Politico … IFOW held ‘Making the Future Work’, an AI Fringe Summit on the Future of Work… the MKAI Global AI Ethics and Safety - People’s Summit took place… back at the British Library there were discussions on AI and the creative industries … meaningful public involvement in AI governance … biotech AI … fireside chats on responsibly deploying AI for creatives and consumers and AI and climate … keynotes on DNA history from Ginkgo Bioworks’ Anna Marie Wagner and on consumers and competition from the CMA’s Marcus Bokkerink … the Summit closing plenary, with contributions from stage and audience, brought the day to a close… the Government published a summary of the day’s discussions … the PM appeared on Peston …
THURSDAY
Most of Thursday’s Bletchley action was behind closed doors, but the PM did a press conference and the UK published an overall summary, a summary of roundtables, summary of the ‘State of the Science’ report and summary of the discussion on safety testing … AI Fringe discussions on the future of work … bridging the AI skills gap … how to engineer responsible AI … elections, misinformation and democracy … fairness, bias and law … from the Digital Regulation Cooperation Forum … fireside chats on the future of work and open source in AI … a keynote on democratic institutions and processes from UNESCO’s Gabriela Ramos … and the day ended with Rishi Sunak interviewing Elon Musk and overshadowing everything the Summit somehow managed to achieve…
FRIDAY
Panels wrapping up the AI Fringe with Matt Clifford, Dame Angela McLean and Peter Kyle (Kyle also speaking to the Telegraph, AI would have saved my mother from lung cancer, says Labour MP)… officials from the UK and US and Lawyers Hub’s Linda Bonyo … and Summit attendees Francine Bennett (Ada) and Lila Ibrahim (Google DeepMind) … and our People’s Panel gave their verdict on the week.
REACTIONS
Ian Hogarth (Frontier AI Taskforce) and Matt Clifford (PM’s Summit representative) tweeted threads (Clifford was also profiled by Politico and did a thing for No 10)…
A joint statement from the civil society organisations at the Summit … from the UK’s national academies … AI Now published some remarks their executive director, Amba Kak, made at the Summit… the Blavatnik School’s Ciaran Martin wrote about ‘Optimists, doomers and securo-pragmatists: Reflections on the UK’s AI safety summit’… from Marietje Schaake … Conversation on AI should not just be driven by the technology but a vision of the society we want to live in, writes the British Academy’s Hetan Shah… a thread from Kanjun Qiu of Imbue AI …
Michelle Donelan made a statement in parliament, which prompted a short debate … the chair of the Science, Innovation and Technology committee called for near-term AI governance challenges to be addressed with the same urgency and unity as ‘frontier’ risks…
The FT welcomed the move towards the mundane: ‘In AI, focus on technocrats not terminators’… Sam Coates (Sky News) on the Sunak/Musk ‘interview’ … AI Safety Summit Lauded As “Success” But MPs Question What’s Next (PoliticsHome)… The Hypocrisy of the AI Summit: The UK is Doing Nothing About Political Misinformation (David Puttnam for Byline Times)… The AI Debate Is Happening in a Cocoon (The Atlantic)… U.K.’s AI Safety Summit Ends With Limited, but Meaningful, Progress (TIME)… Rishi Sunak has reason to consider his AI summit a success – though voters aren’t likely to notice (IfG)… Sunak the influencer: How the UK’s AI summit surprised the skeptics (Politico)… The week when AI and geopolitics collided - a nuanced summary from Digital Bridge (Politico)… AI is not the problem, prime minister – the corporations that control it are (The Observer)… AI as Co-Pilot? Why work and workers deserve better (IFOW)… World Powers Say They Want to Contain AI. They’re Also Racing to Advance It (Wired)… ‘It’s not clear we can control it’: what they said at the Bletchley Park AI summit (The Guardian)… UK AI summit is a ‘photo opportunity’ not an open debate, critics say (New Scientist)… What Can the UK Teach the World About AI Safety? (Byline Times)… US and China join global leaders to lay out need for AI rulemaking (Politico)… Rishi Sunak’s AI plan has no teeth – and once again, big tech is ready to exploit that (The Guardian)…
Jeni threaded thoughts on marking homework, climate and safety, and who was excluded - and an overall assessment of the Summit and how the AI Safety Institute might play out … she also appeared on the Evening Standard podcast … We’ve had dangerous AI with us for decades, and never had a summit for racist algorithms (Susannah Copson, BBW)… Joint Statement on AI Safety and Openness (Mozilla)… another (via the Daily Mail)… UC Berkeley Center for Human-Compatible AI published a statement from attendees of their ‘International Dialogue on AI Safety’…
POLLING
AI-pocalypse? No. (applause, Labour Together)… “Move slow and fix things”: Britons concerned by current pace of AI deployment (Luminate)… Public Supports AI Safety Summit But Thinks Big Tech Has Too Much Influence (PoliticsHome)… 83% of Brits demand companies prove AI systems are safe before release (AI Safety Communications Centre)… while Ipsos spoke to some experts for their report, Debating Responsible AI: The UK Expert View …
AND FINALLY
The less said about this the better, FCDO… while the FT reported from AI’s Human Safety Summit.