Protecting and empowering communities in an era of artificial intelligence

Tim Davies

How can we protect and empower communities in an era of artificial intelligence – and how should the Green Party approach the governance of data and AI? These were the overarching questions posed by our executive director, Jeni Tennison, to Natalie Bennett (former leader of the Green Party and now a member of the House of Lords) and Andy Stirling (Professor of Science and Technology Policy at the Science Policy Research Unit at the University of Sussex) at the Green Party conference in Brighton.

Photo of Natalie, Andy and Jeni sitting at the front of the fringe meeting room

Andy opened by noting the ‘insidious’ nature of AI – it’s everywhere, and is now able to fool people. It may be like adding kerosene to the fire of issues such as inequality and power; there is a risk that, by ‘blackboxing’ what evidence and science can do as ‘AI’, we end up taking how power works for granted. Instead, we should view science, technology and innovation as a democratic issue, not a technical one.

Natalie agreed that the debate is often dominated by ‘tech determinism’ which displaces politics and political choice (including around Universal Basic Income and the future of work). She related her experience of working on the Online Safety Bill in the Lords, meeting many peers’ grandchildren virtually as they taught their grandparents how the tech worked – we have choices to make even as we get up to speed with the new tech. Her concerns about AI included workers not being replaced by robots, but instead being forced to act like them; copyright and intellectual property issues; and future generations of generative AI being trained on the outputs of previous ones, which are essentially bad first-year essays.

Green values

What Green values should inform the party’s approach to AI? For Natalie, it should be centring human needs (including in the workplace and on creators’ rights) and allowing people to be people rather than being treated like robots. She noted the current Lords committee on AI in weapon systems, and concerns that the UK has not committed to ensuring a ‘human in the loop’. She also pointed towards the motion before the Green Party conference (approved the following day) which noted that AI-generated content should be clearly identified as such. She also underlined the need for education to teach people how to investigate and to promote critical thinking, rather than regurgitating things for exams and treating them as gospel.

Andy thought the Greens had a huge responsibility and opportunity around AI – very few social and political movements have such scope, uniquely bringing social justice and environmental movements together. Precaution, sustainability and democracy were among the important concepts. We should also remember that AI is complex and not one thing: questions on ‘will it save the world or is it the spawn of Beelzebub’ miss that it’s both. We should steer innovation – a blanket ‘pro-innovation’ discourse is totalitarian – and we could ban applications in some areas (such as autonomous drones in defence). Natalie agreed on the environmental impacts – around the Online Safety Bill, people talked about things as if they happen ‘in the cloud’, rather than consuming a huge amount of electricity and materials, with mining and other impacts often being especially acute in the Global South. ‘Do we really need smart toasters?’ Just because we can do something doesn’t mean it has to be done, a concept that should be foundational.

Positivity and participation

Jeni asked about a positive vision for AI. Natalie highlighted medical scans. She noted that people often think we can take away dull, routine jobs – she had used a self-checkout earlier – but we should be wary: in the midst of a loneliness pandemic, cashiers play an important role. Connected by Data has found that surveillance of postal workers often stops them chatting to members of the community – Natalie noted that postal workers in France are being paid to do that, but it can change the nature of the interaction. ‘AI is basically big data which can do useful things’ but we should remember ‘there’s a much broader picture around the job than just the job’. Andy felt that, as a society, we had been zombified into simply accepting tech, but people may be seeing the risks more clearly with AI. The big question is ‘is tech the tail wagging the social dog?’ – we should ensure tech serves society and democracy.

Following that, Jeni asked how we should give power to communities through democratising data and AI governance. Andy noted that the devil is always in the detail (including who is there and what constraints they are operating under) – deliberative processes are always a dance with the instrumental use of power, so how can we dance in the most effective way? Public deliberation and engagement are not inherently good – they often aim at cranking out a single answer or consensus, vulnerable to the workings of the machine and forces of justification (participants often police themselves). Instead, such processes could come up with different things, understanding how different interests and value systems would play out, and find where the disagreements are. This could lead to multiple recommendations, informed by different values, and stimulate democracy rather than subverting it. Natalie noted the magical thinking around AI – that it is always separate and different – when we should understand that the same problems from other parts of politics also apply here. People often say ‘I can’t possibly understand that’ when faced with AI – but you can understand the issues and the outcomes without having to know the code.

Questions from the audience followed on subjects including:

  • The transparency of AI systems for non-technical people. Andy replied that few values are more important than transparency (along with accountability), but it is not an unmitigated good – what do you do when someone says ‘here’s 10,000 pages or lines of code for transparency’? Natalie underlined the importance of being able to ask questions – we need proper openness. Jeni noted that government will have worked hard to get its recommendations into multiple expert reports on different sectors – returning to Andy’s point, we should reject single recommendations in favour of recommendations grounded in different values and interests, with transparency around those.
  • The personal ownership of our data. Natalie felt that we should have some control over our data (such as that collected by our smartwatches) but was interested in Jeni’s argument against ‘ownership’ as a frame: it is essentially what we have now, but gives an illusion of ‘consent’ and ‘privacy’ – we need collective mechanisms that operate at a higher level and recognise that we are all connected by data.
  • Institutional reform. The questioner noted the controversy around Cambridge Analytica, and that the Electoral Commission had little power. Natalie noted that this has often been the case with elections.
  • Corporate power. The questioner said that training AI models requires a huge amount of data, which only a few companies can do – what can a democracy do about that? Natalie used the analogy of the food industry, which also has a few dominant companies – this is an issue across our economy and is not unique to tech. Tech reflects society rather than creating it – a different kind of economy might give us a different kind of tech sector. Andy noted the ‘colonial workings of power’ at a global level and agreed this wasn’t just an AI issue – democracy needs to deal with corporate power and related issues. Complexity also should not paralyse us – people don’t need to understand tax codes in detail but are allowed to have opinions, so we need to know how values drive expertise. We should address centres of power through democratic challenge, though it is easy to become fragmented into fiefdoms by different areas of expertise.
  • Regulation of AI. This may be a pivotal moment, with election cycles and initiatives around the world. Andy said his general rule of thumb was that if the EU was proposing something, it was likely to be sensible. We need to be careful not to be boxed into being ‘Luddites’ (though they were much more nuanced than simply stopping technology), and it should be about steering things in the right direction rather than being for or against. Natalie thought ‘we’re going to take an advisory approach rather than legislating’ was up there with the worst phrases in politics, and thought the current UK government approach was too weak – clear rules are needed.
  • Controlling the parameters and taking our data back. Natalie agreed that ‘garbage in, garbage out’ is a problem. Andy wondered if we should want to get our data back, as that would individualise the problem when we may need a collective paradigm.
  • Copyright, and text and data mining. Natalie was going to be talking to UK Music later. Andy wasn’t entirely sure what he thought, but thought discussion should include open source – he thought academia should be open source, but that people are currently getting shafted in the arts and creative industries.

Asked for final thoughts, Andy said it was easy to feel demoralised and paralysed by scale, but we need a discussion based on hope rather than fear about how to struggle against entrenched power across the whole tech landscape. Natalie recounted a panel she had been on a few years earlier about digital privacy. Her message was not to talk only about digital privacy, but privacy – bring in other organisations, acknowledge you are part of a wider system (not just digital), bring systems thinking to issues like AI regulation, and join the dots on different social and political issues and AI.

Panelists

Natalie Bennett was leader of the Green Party of England and Wales from 2012 to 2016 and was appointed to the House of Lords in September 2019. She draws on her background as a journalist, agricultural scientist, and advocate for women’s issues and various progressive causes, including championing universal basic income and addressing food security and farming concerns within the party. Natalie has a Master’s in mass communication, with a thesis on the relationship between readers and texts on the internet, and has been actively engaged with issues of AI regulation.

Andy Stirling is Professor of Science and Technology Policy at the Science Policy Research Unit at the University of Sussex, where he works on issues of power, uncertainty and diversity in science and technology (especially around energy and biotech). Andy has served on a number of UK, EU and wider governmental advisory committees, including as a lead author for the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES).

Chair: Jeni Tennison is an expert in all things data, from technology to governance, strategy and public policy. She is the founder of Connected by Data, a Shuttleworth Foundation Fellow and an Affiliated Researcher at the Bennett Institute for Public Policy. Jeni was CEO of the Open Data Institute, where she held leadership roles for nine years. There, she developed and directed the ODI’s approach to topics such as open data, data governance, data portability and data institutions, as well as leading research, development and advisory work in sectors ranging from health and climate to agriculture and engineering. Jeni is the co-chair of the Data Governance Working Group at the Global Partnership on AI, and sits on the Boards of Creative Commons, the Global Partnership for Sustainable Development Data and the Information Law and Policy Centre. She has a PhD in AI and an OBE for services to technology and open data.

We were in Brighton, in Room 8 of the conference, from 17:15 to 18:30 on Saturday 7 October 2023.
