The conference marked the launch of reports from two projects that the Data Governance Working Group commissioned and has been working on:
- Enabling data sharing for social benefit through data trusts
- Advancing research and practice on data justice
Both are relevant to our mission at CONNECTED BY DATA.
I’ve also been doing some research, applied for a big grant, and – of course – hiring!
Work on data justice aims to shift the emphasis of data ethics and governance onto issues of social justice, such as equity, fairness and the confrontation of inequalities. The substantial and substantive work done by the Alan Turing Institute identifies participation as one of six pillars of data justice (alongside power, equity, access, identity and knowledge).
The Turing team have created a series of (drafts of) guides for policymakers, developers and affected communities. Stakeholder engagement features heavily in all three, as the primary route through which the voices of the people and communities affected by the collection and use of data can be heard. They will be a useful foundation and inspiration for our own work in this area.
Data trusts are one model of institutional intervention designed to provide a vehicle for either increasing or restricting access to data.
There has been a lot of theoretical work on data trusts (and other novel institutional forms such as cooperatives, unions and commons) over the last few years, not least through the data institutions programme at ODI. It’s a term that has a lot of traction, and a concept in which people place a lot of hope, as a way of empowering people and challenging the tech monopolies. At its heart, it requires individuals to participate in an organisation that stewards and shares data on their behalf.
However, there is less activity on the ground, and when the ODI and Aapti Institute reported the results of a survey of data institutions earlier in the year, they found no organisations that fulfilled all the functions of a “data trust” as we would define it at GPAI.
To explore this further, we at GPAI wanted to commission some work to understand how a “proper” data trust would work, and the barriers which are preventing them from arising. So, the ODI, Aapti Institute and Data Trusts Initiative went on to co-create designs for data trusts in three climate-change-focused domains:
- cycling in London
- smallholder farming in India
- indigenous climate migration in Peru
Digging into the details of what would be entailed in actually creating and running such a data trust surfaced a variety of issues: some to do with the enabling environment (such as legal foundations); some to do with practical barriers, such as financing; and some to do with basics like incentive structures and the technological access and capability of prospective contributors.
Operating within these (realistic) constraints narrows the applicability of data trusts significantly and demonstrates why they are far from a panacea, even with policy-level changes.
I have a few research projects and collaborations brewing at the moment:
- A piece with Astha about what impact-focused data regulation might look like
- A piece with Jess and Ben at the newly launched Bennett Institute for Applied Data Science on patient attitudes to the use of health data and the limits of patient engagement as it’s currently practised
- A piece I’m writing largely under the banner of the Bennett Institute for Public Policy on what we can learn about the regulation of data from the regulation of food
The first two are in pretty early stages and I’m about halfway through the third, hoping to get it into a reasonable state this week so that I can talk about it at the Bennett Institute for Public Policy event, Lessons from history for governing the digital future, on 5th April. At a high level, the summary is:
- Data is like food because it’s pervasive, complex, diverse, and our choices around it affect not only ourselves but other people, communities, and the environment
- There are a number of themes from the regulation of food that we can learn from in the regulation of data and AI systems:
  - Create a testing infrastructure of professional auditors and analysts
  - Incentivise due diligence within the data ecosystem
  - Require transparency about the risks of data and AI systems
  - Set minimum standards and inspect for adherence
  - Use behavioural nudges for societal and environmental benefits
- Changes in law and policy come about when there is both a crisis and the groundwork in place to respond rapidly to it
Doing this research has been really fun. I love learning about new domains, and I think some of the insights could be genuinely helpful in reimagining the way we regulate data.
On organisational matters, I spent a lot of time this week on the recruitment of the Advocacy and Campaign Director for CONNECTED BY DATA. I thought I’d document this in a bit of detail, since I find it’s always helpful to see and learn from how organisations do this kind of thing.
Having filtered out people who were plainly not suited for the role based on their CV, I sent the remainder (around 20) a questionnaire to help me with the screening and sifting. I’ve made a template version in case you want to have a look.
The questionnaire asked applicants:
- to confirm they understood the limits of the role: its term, that it’s part time, that it’s in a small organisation, and the funding available for it
- to provide basic HR information such as their current salary and notice period (I also asked about diversity here)
- to self-assess against some key requirements for skills and experience for the role
- to describe their networks with journalists, politicians and their advisers
- to give me a sense of who they are:
  - what three words they would use to describe themselves
  - what strength they would want to exercise in the role
  - what they would want to learn from it
  - what organisational practice they would want to introduce
The final question about organisational practices produced some answers that were perhaps unsurprising:
- Purpose, strategy, goals and planning
  - Having a clear sense of purpose
  - Having a clear strategic plan
  - Having clear goals
  - Having a risk management plan
- Transparency and openness
  - Communicating clearly and often
  - Being transparent
  - Working in the open
  - Paying everyone fairly
- Being a learning organisation
  - Being open to learning and to diversity
  - Creating a culture where it’s safe to fail
  - Collaborating in an enabling way
  - Using the Reinventing Organisations advice process
- Looking after ourselves and each other
  - Caring for each other
  - Appreciating each other
  - Playing games together
  - Having a wellness fund
I particularly appreciated the pointer to the Reinventing Organisations wiki. I enjoyed the book a lot, and would love to put into practice many of its tenets.
Filtering down the responses was difficult, because of the range of good applicants, but I got it down to five, have given them a written task to do, and have interviews arranged with them next week.
I have also informed all the rejected applicants, and provided detailed feedback (including suggestions about how to improve their CVs and covering letters) to the half dozen who asked for it. Applying for jobs can be so soul-destroying that I figured giving them something back was the least I could do.
I also asked for feedback on the process through an anonymous questionnaire. Only a couple have responded, but both very positively.
The other news on the jobs front is that I’ve decided to defer hiring into the Researcher post that I had been advertising. That’s because I’ve found someone great to work with me and take on that work. More soon on that, once all the contracts are in place.
I hadn’t intended to start fundraising yet, but I was pointed at the FTX Foundation Future Fund, which had a deadline last Monday, so spent much of last Sunday putting together an ambitious bid to them.
Doing this was helpful as a way of articulating the longer term (5 year) goals and vision for CONNECTED BY DATA, and I will aim to write it up into something that’s more widely sharable, not least because it would be a good basis for other fundraising in the future.
At best, the Shuttleworth Foundation fellowship funding will only last for three years, and it isn’t at the level needed to make the kind of impact I want to make through this initiative, so I’m going to need to find other support from somewhere.
I had a great chat with Sam Gilbert earlier in the week, and he helpfully followed up with a discussion about how internal and external legitimacy might play into collective data governance. There’s more to follow up there. Edit: Actually what Sam pointed me to was input and output legitimacy, which is something different! Both are useful frameworks for thinking about this.
On Discord, Leigh, Libby, Sam, Tim and others have been musing about the scale of collective data stewardship, especially when the data being stewarded has relevance and applicability at multiple scales. For example, energy data might be stewarded at a very low level (a school), at a local level (a local authority), nationally, or even internationally. Lower-level stewardship lends itself more readily to direct relationships with the relevant communities, but comes with both an organisational overhead (creating sustainable organisations is hard) and additional burdens on individuals (who might have to maintain relationships with multiple such low-level organisations). If you’ve got opinions on this, come join the chat!