On 14th May 2024, I was a panellist at an expert panel and discussion on the Future of Courts, jointly sponsored by The Nuffield Foundation and The Legal Education Foundation in collaboration with UCL Faculty of Laws.
The courts in England & Wales and the USA are witnessing the most significant shift in their approach to the delivery of justice in over a century. The Lord Chancellor and senior judiciary have recently published a Joint Vision for the future of civil and family courts and tribunals. The invitation-only event was organised to focus on the opportunities and challenges that technology poses for courts on both sides of the Atlantic.
These were my remarks:
My focus is on the impacts that things like the use of data and AI, or the adoption of digital services, have on people, communities and society.
It’s useful to focus on people and their experience not only from a moral perspective but also from an instrumental one: it’s people who choose to adopt technology, or try to avoid or subvert it. If you create a system that doesn’t meet their needs, they won’t use it. If you ask them to share data, but they think that data is going to be misused or used in ways that will be biased against them, they’ll avoid telling you, or lie, or avoid the service altogether. We’ve seen this a lot in health, where for example concerns about the General Practice Data for Planning and Research (GPDPR) programme led to a wave of opt-outs from the use of health data for research and planning.
Trust is essential for the adoption of technologies. Levels of trust are not equal across different groups: in general, people who are marginalised in society are the ones who are (probably rightly) most distrustful and therefore most likely to resist. If resisting means avoiding a service that is important to them, like the justice system, then you’re further disadvantaging them; in the justice system, that means reduced access to justice.
There are lots of different kinds of people involved in the justice system, playing different roles. We can put them loosely into three buckets:
- There are the people who are involved in court proceedings themselves, including litigants, defendants, witnesses, victims and so on.
- There are the people running the court system, from those in the courts themselves, like court officers, lawyers and judges, to policymakers and the developers of lawtech systems.
- There are the people who provide accountability around this system, such as journalists, court observers, academics and civil society groups who care about access to justice.
There is no single public with one set of attitudes or interests. There is a wide variety of people within these buckets, who will have very different attitudes and needs because of the kind of role they play, the power and privilege they enjoy, and their own personal experiences, values and preferences.
Questions and challenges
Taking this standpoint of thinking about the impacts on people, communities and society, I’m going to pose some questions and challenges around the transformation of the justice system through technology that I think need consideration, based on experience in other sectors.
First, how will this change the experience of people accessing and using the justice system? Will they feel confident and supported? Will it feel fair? Two challenges to think about:
- Public attitudes research is pretty consistent in highlighting that people are distrustful of how the private sector uses their personal data. That’s partly about worrying it’s going to be used to advertise to them, affect insurance premiums and so on, and partly a feeling of exploitation: that it’s not fair for others to profit off personal data. So how will the increased use of private sector suppliers of legal services be received, and what impacts might that have?
- With many automated systems, people will feel that the result isn’t fair because it didn’t take into account the particular context they’re operating in. Data by its nature flattens and categorises people, and isn’t good at capturing complex contexts and nuance. Despite all we know about humans also being biased and making poor decisions, people may feel more seen and understood by another person than by a digital system, and that affects whether they feel that decisions are fair and justice has been done.
Second, how are different groups of people in society affected differently by these changes? Are they changes that promote equality, or are they going to make existing inequalities even worse? Again, there are two challenges to consider:
- Often the people who need the most help and support accessing services like justice are precisely the people who, for a range of reasons, won’t be able or willing to use digital services. So introducing technology just makes life even easier for the people it’s already pretty easy for, rather than for those who are marginalised or vulnerable. Overall, technologies tend to increase inequalities.
- Specifically when we’re thinking about AI, we know that learning from the past tends to embed the biases of the past, so if there are people who are already systemically let down by the justice system, there’s a risk of perpetuating that through the adoption of technologies based on machine learning.
Third, how does this change the experience of people working in and around the courts system? Are the changes likely to make more room for “good work” – providing mastery, autonomy and purpose, and fulfilling interpersonal connections to people they care about – or are they likely to dehumanise work and make workers feel that they are just doing what a computer tells them to do? In many cases where case management tools are put into place, the people managing those cases become managed by the tools, particularly when those tools get tied to productivity and performance measures. Making sure that work within the justice system continues to be good, satisfying work is important.
Finally, how will the justice system be legible and trustworthy to the outside world? This is a consideration that is particularly important to the justice sector, where you have a principle of open justice – that justice has to be done and seen to be done. Building in openness to the kinds of digital tools that are being put into place, enabling and supporting observation and reporting, as well as more formal monitoring, for example about equality impacts, will be essential to preserve this important principle.
Approaches and solutions
In posing these questions and challenges, I’m not implying that technological transformation shouldn’t happen. If you know about these risks and take them into consideration, there are ways of tackling them in the development of policy, guidelines, standards and AI systems themselves.
The fundamental approach to identifying, mitigating and addressing these risks – and to building trust – is to orient around and listen to the people who are going to be affected by them. There are three ways to do that:
- Ex-ante, before and while you design and develop policy, guidelines, standards and data/AI systems, you can prevent risks and create buy-in by including the people who are going to be affected: in shaping the vision; in impact assessment processes; and even in providing the data and information that determines how the system works.
- Ex-post, after deploying these things, you can build formal and informal governance and accountability processes that enhance your ability to act quickly when things go wrong: have complaint mechanisms; enable people to ask for explanations or request audits; and provide mechanisms for them to get redress when they are harmed.
- Transparency supports both those ex-ante and ex-post measures and stops people plugging gaps in their knowledge with imagined nefarious things you might be up to. You should be as open as possible about everything, and you should empower media, academia and civil society to act as independent voices, talking about what you’re doing, who may be more trusted than you are.
Final thoughts
It’s easy for digital transformations to get carried away with what technology can do rather than focusing on what matters to us as people, communities and societies. By default, technology tends to embed inequalities rather than address them, so we have to work especially hard to counter that. Actively listening to and bringing in the voices of the many different people and communities who are affected by these changes will lead to better outcomes for everyone.