Tim spoke as part of a panel for UNICEF UK’s in-house Digital Week on the Risks and Benefits of AI.
The discussion touched on three key areas. My tidied-up speaking notes in response to the prepared questions are below. These might vary a little from the remarks I delivered, but they give a sense of the territory covered.
Is AI the next evolution of Digital? If so, how should a children’s rights charity respond?
If you’ll forgive me, I’m going to start by taking a bit of a personal look back at past evolutions of digital before getting to AI.
As I was preparing for this talk I was reflecting on some of my early encounters with both emerging technologies and, as it happens, the UN Convention on the Rights of the Child. As a 17-year-old I was a member of the then Children and Young People’s Unit’s Youth Advisory Board to the then Minister for Children and Young People, John Denham.
We were invited to be observers for the periodic review of the UK in front of the Committee on the Rights of the Child, and spent a week in Geneva both attending committee sessions and meeting groups across the city, including the ILO, UNICEF and others. I was armed with the pocket-sized digital camera that my youth service in Havant Borough had found budget for a few weeks before, capturing photos and 30-second videos (all the camera memory could accommodate) and sharing them in proto-blog posts on the website Taking It Global - a platform for young people from across the world to share projects, actions and activities.
In the early 2000s, the web had only been with us for a decade, and we were excited about the potential of this technology to connect us - with peers at home, and with young people working on advocacy for children’s rights across the globe.
Fast forward a few years, and I was working at the National Youth Agency, developing a research programme on Youth Work and Social Networking in the era of MySpace, Bebo and early Facebook. We were exploring both how youth workers could make positive use of emerging social media, and how they should take account of the impact it was having on young people’s lives: recognising that to meet young people where they are, we need to critically understand the information and media landscape they operate in.
From there I turned to the field of open data: not least exploring how far standards and infrastructures for sharing data could streamline access to information on positive activities for young people, through a project called Plings. There we constantly navigated the tension between harvesting the snippets of information we could find in leaflets and on websites about when football clubs or scout groups were open, and going direct to the sources: providing tools and incentives for group leaders to supply up-to-date and accurate information about their activities and clubs, and their capacity for new members to get involved.
At that point my own journey moved away from youth services and into work on open government, though in my current role at Connected by Data I draw heavily on a commitment to participatory practice rooted in my own experience of projects based on Article 12 of the UNCRC. I start with these reflections about different waves of digital for a couple of reasons.
The early web, social media, open data - and the opportunities and challenges they bring - are all still with us. AI is not so much an evolution that transcends them as a layer on top that interacts with them. And the versions of AI we have with us now are predominantly shaped not by the kind of bottom-up experimentation and distributed logic that drove the early web, but by centralised platforms and Silicon Valley capital: we must keep our eyes open to this. I’d argue that ChatGPT has not been disruptive because of its underlying technology, but because of its interface: a chat box that makes us more forgiving of the limitations of generative AI models - which produce plausible, but not necessarily factual, texts and images. The generative AI wave also has the hallmarks of a bubble: Daron Acemoglu’s recent paper on the macroeconomics of AI offers a compelling challenge to claims about the contribution it could make to growth and productivity - yet governments, corporations and consultancies are selling big claims of its impact.
The question put to us was: is this the next evolution of digital, and how should children’s rights organisations respond? Within AI are some evolutions of digital - and many elements that also embed ongoing evolutions of capitalism - but that’s not the whole story.
Regardless, organisations do need to respond to AI: and for the how, the watchwords for me, which we might come back to, are bounded experimentation, transparency and accountability, and a commitment to an inclusive and participatory approach.
What advantages are there for you personally and your work from the emergence of new applications of AI? How could we harness these?
I mentioned earlier the idea of bounded experiments with AI - finding opportunities to test and benchmark what current tools can and can’t bring. The main place I’ve been doing that in my own work is in preparing for, and writing up, participatory public engagement sessions.
Last year we ran a kind of citizens’ jury alongside the AI Safety Summit, which we called the People’s Panel on AI, and this year we’ve been running an ongoing panel of members of the public to input into Responsible AI UK’s Public Voices on AI project.
Generative AI tools have been useful there in a couple of contexts. Firstly, the members of the public we worked with have asked a couple of times for notes or summaries of presentations or panels we’ve asked them to observe and engage with. In the past, if I’d not arranged for a note-taker in advance, this would have been a tricky request to meet. But we found we could go from a YouTube video of a recorded panel, to a transcript, to a reasonable aide-memoire summary in about 15 minutes. I could have said less than 5 minutes - as that’s how long the technology took - but we found we often had about 10 minutes’ work tidying up errors in the transcription of names and details: whilst speech recognition rarely struggled to capture Johns or Junes, it struggled to consistently label Ahmed or Aditya - errors which, if not caught upstream, led to misattribution of points or, at worst, erasure of their contributions from the record.
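For those curious about the mechanics, the pipeline can be sketched in a few lines. This is only an illustrative sketch, not a record of exactly what we ran: the choice of the open-source Whisper model for transcription, the OpenAI chat API and model name for summarisation, and the file names are all my assumptions here.

```python
# Illustrative sketch: audio from a recorded panel -> transcript -> aide-memoire summary.
# Assumes the audio has already been downloaded (e.g. from YouTube) as panel.mp3,
# and that the openai-whisper and openai packages are installed with an API key configured.
import whisper
from openai import OpenAI

# 1. Machine transcription - names are usually the weak point and need a human check.
model = whisper.load_model("base")
transcript = model.transcribe("panel.mp3")["text"]

# 2. Ask a chat model for a short aide-memoire summary of the (corrected) transcript.
client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Summarise panel transcripts as short aide-memoire notes."},
        {"role": "user", "content": transcript},
    ],
)
print(response.choices[0].message.content)
```

The human step of correcting names and details sits between the two calls - which is where most of the 15 minutes goes.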
The second use we’ve been making of generative AI is to help us write up sessions - again recorded and machine transcribed. We’ve run a couple of blind tests comparing a human-written summary to an AI summary - and found the AI summary lacks nuance, and that we lose an important opportunity to engage with the text more slowly. However, after we’ve written our first pass pulling out themes and quotes from the transcripts, feeding those transcripts into ChatGPT and having an interactive conversation to see if it suggests areas we’ve missed, or pulls out different verbatim quotes from the text on given themes, can be a useful way of checking our biases.
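To make that concrete, the kind of exchange involved might look like the sketch below - again with hypothetical file names, prompt wording and model choice rather than a record of our actual process. The key point is that our own first-pass themes go in alongside the transcript, and the model is asked to challenge them.

```python
# Illustrative sketch: ask a chat model to challenge a human-drafted write-up.
from openai import OpenAI

client = OpenAI()

# Hypothetical file names: the machine transcript and our own first-pass themes.
transcript = open("session_transcript.txt").read()
draft_themes = open("draft_themes.txt").read()

prompt = (
    "Here is a workshop transcript, followed by the themes we drafted from it.\n\n"
    f"TRANSCRIPT:\n{transcript}\n\n"
    f"DRAFT THEMES:\n{draft_themes}\n\n"
    "Which themes, if any, have we missed? For each draft theme, suggest one or two "
    "verbatim quotes from the transcript that illustrate it."
)
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```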
So - whilst lots of AI tools are built at the moment to offer you a ‘first draft’, I think harnessing them as part of developing the second draft is much more interesting and valuable.
I’ve also been thinking about the AI tools we might not use, but others could be drawing on. After experimenting with getting NotebookLM to generate a talking heads podcast from a recent report on global deliberation on AI (a copy here if you want to listen!), I’ve been thinking about how anything we now write might be read via AI-driven summarisation. In the case of our Global Citizen Deliberation on AI report, it appears that, essentially by accident, it was structured in a way that led to a very effective podcast summary. But I’ve tried other reports which are not handled so well. Without twisting our content in some kind of AI-SEO practice, we may need to think about how our content production both shapes AI models overall, and how well it communicates when mediated by common AI models.
What are the greatest risks of AI and how should we, as an ethical organisation, take account of these in how we work internally and how we work with others?
I want to focus on two areas in particular. Firstly, responsible data practices. Using an AI system often means combining a pre-trained model with data that your organisation has collected or holds. When that data involves children and young people, there are extra considerations to take into account. The Responsible Data for Children principles and guidance are an excellent starting point for thinking about this. We need to recognise that AI tools are data processing tools, and call for transparent, accountable and participatory data practices.
Secondly, and to build on the topic of AI bias, I think we need to address the potential biases in AI models when it comes to the representation of children and young people. There has been work on ecolinguistic bias in AI models, trained on historical content that does not represent the ecological future we need to create, and I wonder whether similar challenges apply when it comes to the alignment of AI models with children’s rights and the lived experience of children and young people. Models trained on content from the open web are likely to underrepresent content written from a child or young person’s perspective. Being aware of these specific biases may be particularly important for a children’s rights organisation.
In the discussion session we also explored questions of governance, and how far charities should be ‘early adopters’ or whether they should play another role in the changing technology landscape.
We’ve been thinking a lot about the different perspectives we can take on AI: from a workers’ perspective, an environmental perspective, a bias perspective, an inclusion perspective and so on. Effective governance involves not leaving AI to be the domain of one board member or team, but accepting that everyone has a relevant perspective on the AI elephant, and making sure everyone feels empowered to ask questions and bring their distinct perspectives to the table.
When it comes to the role of charities in engaging with emerging digital tools, I think we need to focus in particular on the collaborative advantage. Individual organisations may have neither the capacity, mandate nor power to substantially shape technologies - but together, we can. So, for children’s charities, working with others to call for better AI models that are aligned with the lived experience of children and young people may be important. Or it may be important to work together on getting tools implemented in ways that embed responsible data for children practices. At Connected by Data, through the Data and AI CSO Network, we’ve been exploring how civil society groups can be stronger together.
Coda
As this was for an internal workshop, these notes reflect only the inputs I offered to the session. However, I want to note the valuable learning I gained from my fellow panellists, and my gratitude to the chair for curating a really interesting discussion.