This is my first week back after a couple of weeks of much-needed holiday in Devon. It’s been great to see how much has moved forward in my absence, and I’ve particularly enjoyed seeing how Emily has taken to weeknoting (putting the rest of us to shame).
Opinionated and constructive
First, some reflections on our positioning. We had a useful session at our off-site looking at the span of organisations we have interacted with, and those we thought were missing from that map. In classic workshop fashion, the stakeholders we included were arranged against two axes: how aligned they were with us, and the degree to which they were insiders or outsiders.
Plotting different organisations, and ourselves, against the insider/outsider axis led to some interesting discussions about what it means:
- tone: outsiders are more direct in their communication than insiders
- connections: insiders are directly connected with policymakers and practitioners
- radicalism: insiders work within existing power structures, while outsiders tend to think they need to be replaced
- critical / constructive: outsiders tend to focus on pointing out what’s wrong with the status quo, while insiders focus on what should be done to improve it
We also discussed how positions on this axis are relative: an organisation in the middle will seem radical to those on the inside, and part of the establishment to those on the fringes.
When I set up Connected by Data, I wanted to provide a more outsider-y voice than (say) the Open Data Institute or Ada Lovelace Institute: a stronger and spikier advocate for collective, participatory data governance, as I felt this campaigning voice was missing from the civil society ecosystem. I wanted to create an organisation unafraid to say that both technocratic and individualistic decision making about data and AI are wrong, and to call it out when we see it. I still want that.
But I’m also frustrated by the academic articles, conference discussions and think pieces that are strong on identifying the power imbalances in the status quo, and always conclude with a call for greater public participation in data and AI governance, but never quite get to describing what that looks like or how it could be achieved. Participation is not an easy thing to get right, and will seldom be perfect; holding it up as a silver bullet allows it to be easily dismissed when it doesn’t meet that unrealistically high bar.
So I also want us to be constructive and realistic as well as radical and opinionated.
That feels like a difficult tension to hold, but something the wonderful Anasuya Sengupta shared during a Shuttleworth Fellows call this week has given me a new perspective on holding tensions like these. That’s the concept of polarity thinking, which recognises that two poles, seemingly at odds, both have value as well as shortcomings and risks, and that instead of trying to forge a middle way, we have to move between them. It encourages us to get good at detecting when to be more opinionated and spiky, and when to be more collegiate and constructive, and to move between the two like breathing in and out.
It’s kind of blown my mind and I’m looking forward to exploring these ideas more with the team at our next off-site.
Effective data governance
It was great to read Asaf Lubin’s paper on Collective data rights and their possible abuse, which is an effective reality check and warning around a concept that is central to our work, much like Johannes Himmelreich’s Against “Democratizing AI” and Abeba Birhane et al’s paper Power to the People? Opportunities and Challenges for Participatory AI.
These give great summaries of the arguments for (and limits of) the first three of the characteristics I’ve listed as making for effective governance (collective, democratic, participatory). And they serve to emphasise the importance of the final two: deliberative and powerful.
The importance of deliberation was highlighted in this Twitter exchange as one of the things that distinguishes a citizens’ assembly from a focus group or yet another public attitudes survey:
If focus groups were demographically representative, given 4-5 days, trained in deliberation, given access to deep info, called and examined expert witnesses, debated, reflected and shifted their position, and came to a consensus, sure. In other words, nothing like a focus group.
— Max Rashbrooke (@MaxRashbrooke) July 17, 2023
This highlighted an important measure of effective deliberation: how many participants change their minds?
The Metaverse Community Forum, which Tim has been looking at this week, has also highlighted the importance of power, and of the adoption of the results of such deliberations. As with the OpenAI programme on democratic inputs to AI, having these exercises run by the organisations that would have to adopt their results is a crucial step. Far better than yet another deliberation initiated by a third-party think tank or academic group, which can provide evidence but faces multiple barriers before it can change practice.
But then, even if deliberations initiated by those who need to act on the results are indeed acted on, there will always be questions about how they influenced the methodology: how participants were chosen, what questions were asked, which witnesses were called. It sometimes feels like it’s impossible to carry out a participatory exercise that doesn’t get accused of participation washing for one reason or another.
I think, as with experimental methods, we just have to accept that participatory exercises are all flawed, and treat their results with only as much weight as those flaws can bear.
Things I’ve been reading
- Dr Émile P. Torres’s summary of their book on human extinction, which gives a really useful framework for thinking about existential risk and our approach to it. I think it’s great but have some criticisms about how it leaves out consideration of what we leave behind.
- Emily Bender’s piece on the “schism” between proponents of AI ethics and AI safety, which she argues is a distinction between people who actually know what they’re talking about and (not coincidentally privileged, white, male) fantasists. The capture of thinking about AI in US colleges described by Nitasha Tiku reminds me of the damage caused by the influence of the Chicago School of Economics on economic policy. As the UK’s Global AI Summit is now explicitly framed around AI Safety, it’s all a bit frustrating.
- Bianca Wylie’s thread and article on the Canadian approach to AI regulation, which highlights how various countries are positioning themselves geopolitically, and the shortfalls in all of them.
What I’m working on
I’m looking forward to the summer lull and using it to finish things off (not least a report comparing food regulation with data regulation which has been gathering dust for a year now!).
But perhaps it’s not going to be that quiet… We just kicked off some work with AWO looking at the current state of regulation for collective impacts of data and AI (the kinds of harms that aren’t covered by data protection), and we’re starting a piece of work with Just Treatment aiming to amplify patient voice around the collection and use of data in the health system.
I’m also aware we have to pick up steam on our fundraising. The Shuttleworth funding, which has paid my salary and been a very comfortable source of unrestricted funding to support all our work, runs out at the end of February, and we need to identify sources to replace it. Suggestions welcome!
Finally, this coming week I’ll be off to New York to meet up with other members of the Aspen-convened Council for Fair Data Future, which I’m looking forward to (because nerding out about data) and dreading (because lots of in-person people time that drains me) in roughly equal measure.