With the UK Government’s Artificial Intelligence White Paper out for consultation until 21 June, the Ada Lovelace Institute convened a roundtable of various stakeholders under the Chatham House Rule to discuss responses to the landmark document. NB: This brief write-up is from the perspective of Adam Cantwell-Corn. Any omissions or errors are his alone.
The focus of the session was on the ‘contestability and redress’ principle in the white paper, and on the ‘central function’ that is intended to support the sectoral regulators as they discharge a new set of responsibilities.
The first prompt for the group centred on what effective means of redress for AI-related harms are currently available. Some participants raised the concern that the white paper appears to exist in a contextual vacuum: it does not acknowledge the existing major failures in redress and complaints mechanisms, including the backlogged and underperforming justice system as a whole, ombudsman schemes, and the practical and legal limitations of key regulators such as the ICO. In this context, the white paper establishes a new set of responsibilities for regulators to consider, but offers neither new substantive powers nor clarity over the resourcing of already stretched institutions.
With this in mind, there was some discussion regarding the value of separating out AI-related harms from other harms. AI of course has unique qualities that require adaptation, but careful thought is needed before creating a separate regime for AI harms, which could unhelpfully divorce them from established forms of accountability and redress. For example, participants stressed the importance of focusing on the outcomes and impacts of the technology, and on the people or institutions that deploy it, rather than getting bogged down in whether AI systems bear accountability in and of themselves.
In the vein of “data policy is AI policy”, the group drew a firm link between the Data Protection and Digital Information Bill and AI policy: the Bill, currently at Commons committee stage, weakens statutory rights and protections in many of the areas that the white paper’s non-statutory stated intentions seek to uphold. This includes the effective exercise of contestability and the seeking of redress, including over automated decision-making.
With the effective exercise of rights in mind, the group discussed both how the broader public interest may be considered beyond identified individuals and how collective rights might be exercised, e.g. through representative mechanisms. With class-action-style lawsuits facing legal barriers, the established mechanism of ‘super-complaints’ against the police, or against companies by consumer bodies, is potentially a good analogy. In the employment context, the German and Austrian practice of ‘works councils’ was cited, where worker representatives are legally empowered, including with expert advice, to shape technological developments in the workplace.
This led to a discussion about the role of civil society in the regulatory ecosystem. While participants wanted to see an empowered role, there was caution against shifting the burden of identifying and addressing harms from the state or the deployers of technology onto civil society. In this context, there was significant discussion about the potential and limitations of an AI Ombudsman, and how this would fit within the distributed regulatory set-up envisioned by the white paper.
Finally, the white paper sets out intentions to measurably improve public trust. There was scepticism about this, as trust is a slippery and manipulable concept. Points were made about focusing instead on public views of the outcomes and impacts of the technology. A point was also made that trust will depend heavily on the quality of the information and insight available to people; indeed, the more transparency there is, the lower trust may be, at least initially.
Rightly, given the need to respond to the consultation and the diversity of stakeholders present, the session focused on contestability and redress for AI tools, rather than taking a broader look at the overall purpose and direction of the tools themselves.
For another occasion, perhaps, there is also a need to articulate a positive vision for the AI transformation, so that this space is not occupied solely by a narrow and technocratic logic of innovation, which could limit civil society’s role to shaving off the sharp edges.