People's Panel on AI Bulletin 2

Tim Davies

This is the second People’s Panel on AI Bulletin. You can find earlier bulletins here. In this update, reflections from our first half-day on:

  • Building public trust: communication campaigns, or open conversation?
  • Making sense of AI: early questions from the panel
  • Dealing with data: questions on privacy protection and model training.

But first, a quick summary of the day.

This was the first day of the panel, and this morning panel members travelled into London from across England, bringing different questions, experiences and insights on AI from everyday life.

Map showing where panel members travelled from. Participants were selected from across England.

After kicking off our discussions with introductions, and by looking at the different kinds of AI application that resonate with panel members, we headed to the British Library for our first fringe session, hearing from Nick Clegg of Meta and Madhumita Murgia of the Financial Times, as well as Sir Nigel Shadbolt and Chloe Smith MP, on a panel discussing the current conversation on AI Safety. We rounded off our day with an hour of reflection, supported by input from Abeba Birhane of the Mozilla Foundation.

“I’ve learnt a lot just this afternoon. I’ve still got a lot more questions, but I didn’t expect to have so much to say already.”

Panel member

Friday presentation: Registration for our Friday in-person panel closes at 12 tomorrow. If you have not yet expressed an interest in joining us in person at 13:30 at Friends House, Euston on Friday 3rd November, and you would like to come along, please complete the form or get in touch with tim@connectedbydata.org.

Panel insights - Day 1

A question of trust?

In their summary of official Pre-Summit engagement, the government reports concerns that ‘public scepticism and fears’ are hindering AI adoption. Yet the official engagement involved few (if any) direct discussions with members of the public to explore the particular trust issues that AI might face, and it seems to suggest the solution lies in communication campaigns about the benefits of AI.

As the panel discussed some of the existing ways AI affects our lives, from social media and smart-speakers, to facial recognition and food delivery, the discussions pointed to how:

  • Learning more about particular technologies might lead to trusting some more, and trusting others less. Building trust will not come from a communications campaign, but needs an open conversation.

  • Discussing the different rules that might be applied to existing and established AI technologies can provide guidance for regulating ‘frontier’ technologies. For example, panellists questioned whether we might want to change defaults around the control and use of smart-speaker recordings, and how social media algorithms should be optimised. Drawing on lived experience of existing AI tools can allow people to more clearly explore the values and priorities that future AI governance should serve.

Working out how to build trust may well call for more time and focus on conversation, rather than on public education or communication campaigns.

Making sense of AI

Does AI have common sense? Can AI systems be ‘taught’ to behave in common sense ways? What is common sense?

What about the kind of ‘sixth sense’ that humans might use, judging body language and all sorts of other subtle signals in, for example, a job interview?

Understanding the kinds of ‘sense’ that people value in everyday life may offer clues to the roles (and regulations) they want to see prescribed for AI tools.

Dealing with our data

Discussion of the data underlying AI models was a common thread throughout the day.

Hopes:

  • Data will be stored safely
  • Personal data privacy will be respected

Concerns:

  • The data fed into AI models will perpetuate inequality
  • Not enough time is spent making datasets fair

Learning:

  • Model collapse could threaten future development of large language models

More discussions on data to come.

What’s next?

Tomorrow, members of the Panel will be talking with scientists at the Hopes and Fears lab, as well as attending sessions on public voice at the Fringe, and spending time in hands-on exploration of generative AI tools.

Look out for more updates tomorrow on panel hopes and fears for AI, and on the questions they think the Fringe and Summit should really be asking.

Updates

If you’d like to be kept up to date with the People’s Panel on AI or join the panel’s presentation on Friday 3rd November, please register your interest here.

About the People’s Panel

The People’s Panel on AI brings together 12 representative members of the public randomly selected by the Sortition Foundation to attend, observe and discuss key events at the AI Fringe, which is being held alongside the UK Government’s AI Safety Summit at the beginning of November 2023.

Through a deliberative process facilitated by Hopkins Van Mil, the panel is working towards a public report giving their verdict on AI and their recommendations to government, industry, civil society and academia for further action.

The People’s Panel on AI is being organised by Connected by Data with support from the Mozilla Foundation, the Accelerate Programme for Scientific Discovery, the Kavli Centre for Ethics, Science, and the Public, and the Ada Lovelace Institute.
