RightsCon 2022

Jeni Tennison

Both Jeni and Tim attended RightsCon, which is the world’s leading summit on human rights in the digital age.

It was a full-on event, with a week's worth of panels and discussions and often four or five sessions to choose from at any one time. We barely scratched the surface, but here are our reflections from the sessions we did manage to attend, focusing in particular on how they relate to our work at Connected by Data.


The sessions we attended were:

Decolonizing co-design: Global South perspectives

Jeni attended this session.

This session looked at the concept of design thinking and co-design: how it arose in Scandinavia as a mechanism by which workers and employees could become involved in design in an industrial setting, and how it has been, and is being, adapted for use within the social sector and outside the Global North.

I was interested in it because co-design as a method lives somewhere near the top of Arnstein's Ladder of Citizen Participation and the IAP2 Spectrum of Public Participation. It should be a methodology that helps organisations and communities to design data governance processes together, for example. At the same time, we know that many of those most affected by data governance decisions are going to be minoritised in one way or another, so decolonising the process – making it as approachable as possible for the range of participants we want to include, and specifically challenging Global North assumptions – is going to be important.

A lot of the discussion centred around the panellists’ experiences facilitating co-design sessions. I was struck by specific practical tips such as:

  • always having two facilitators, at least one of whom is from the community that you're co-designing with
  • using an ice-breaker that involves people sharing happy or loving memories about common experiences (eg favourite foods) with strangers, to focus on positive feelings of common humanity
  • not using the word “solutions” because it carries what can be a crushing expectation of finality, but also because co-design should be more focused on exploring the problem space than finding solutions
  • viewing facilitators as servants to the participants, rather than as their guides

The session also highlighted some areas where there’s a need for particular cultural awareness and adaptation:

  • low capacity contexts in terms of electricity availability, internet connectivity, and access to tools like Miro
  • phrases like “getting outside your comfort zone”, when participants might never experience being inside a comfort zone
  • degrees of bias, prejudice or even conflict between participants, who may come from different groups and communities
  • inclusiveness in terms of language(s) being used
  • visa discrimination when sessions are held physically

The organisers of the panel, Innovation for Change, also shared a new “Spellbook” for co-design, InnoMojo, which looks useful for co-design efforts around data governance.

The state of personal data protection in Africa: a comparative approach

Jeni attended this session.

This was an interactive session focused on people in Africa sharing their experiences and perspectives of personal data protection laws across the continent. One way to track this is to look at which countries have ratified the African Union's Malabo Convention on Cyber Security and Personal Data Protection.

I went along to understand better the current state of data protection law across Africa, and to see whether there were any approaches that incorporated the more collective and participatory approaches to data governance that we’re advocating for.

Most of the session focused on familiar challenges:

  • lack of ratification of the convention (no law means no rights)
  • if there is a law, lack of citizen awareness of those digital and data rights
  • lack of effective enforcement, due to weak or missing regulators

One panellist, speaking about the experience in Ghana, talked about how data is abstract and the concept of “privacy” isn’t something that’s familiar to their way of thinking. When I asked about this, one of the participants described how even the origin and framing of “human rights” is shaped by American and European thinking on what rights look like. Unfortunately the session ended before we could explore this in more detail.

Driving corporate action towards responsible and ethical artificial intelligence

Jeni attended this session.

This session was focused on the World Benchmarking Alliance’s Collective Impact Coalition for Digital Inclusion and insights from their Digital Inclusion Benchmark 2021. The World Benchmarking Alliance is all about improving corporate behaviours towards the Sustainable Development Goals, and the Digital Inclusion Benchmark looked specifically at corporate commitment and action around digital inclusion.

I went to this session to better understand how to drive corporate behaviour specifically towards collective and participatory data governance, as this is an important (I think necessary) approach for producing more responsible and ethical AI.

The headline figures from that report are that only 20 of the 150 companies they looked at have a commitment to ethical AI principles; even those that do commit to those principles don't explicitly reference human rights; and only 15 have processes in place to assess human rights risks posed by AI. Most of the conversation focused on getting companies to commit to a set of AI principles as a first step towards more responsible and ethical approaches overall. (Personally, I think that a more bottom-up approach of including community members in the design process could be a more impactful first step.)

It was particularly interesting having some investors on the panel, as they discussed their need for visibility on the risks and liabilities surrounding the human rights implications of AI, up and down the value chain.

One of the investor panellists did highlight the importance of stakeholder engagement as part of AI development processes. The report says:

**3.2.3 Engaging with affected and potentially affected stakeholders (CSI 6)**

Engaging with affected and potentially affected stakeholders is a critical part of a company’s approach to respecting human rights. This indicator looks at two criteria: a) The company discloses the categories of stakeholders whose human rights have been or may be affected by its activities; and b) the company provides at least two examples of its engagement with stakeholders (or their legitimate representatives or multi-stakeholder initiatives) whose human rights have been or may be affected by its activities in the last two years.

Only five companies (Acer, Amazon, Apple, Microsoft and NEC) met both criteria, while 117 met neither. Apple is particularly notable in this regard, having conducted interviews with 57,000 supply chain workers in 2020. Apple also solicited feedback from almost 200,000 workers in 135 supply facilities in China, India, Ireland, UK, U.S., and Vietnam resulting in over 3,000 actions to address the workers’ concerns. Additionally, the company is investigating the use of new digital labour rights tools featuring data analytics to increase engagement with stakeholders.

(I’ll note that in this example, the communities affected by the use of AI and technology aren’t exactly the same as workers within the supply chain, although obviously they are important too.)

When I asked about good practices, however, the panellists talked about having few good examples to point to, and a lack of clear good practices. Apparently five of the 150 companies had an AI oversight board, but these tended to be technocratic exercises built around technical expertise (in law, ethics and human rights) rather than being made up of, or incorporating, lay members from affected communities.

Inclusivity and expertise in content policy development: identifying the “stakeholder” in stakeholder engagement

Jeni attended this session.

This session was led by Meta, discussing their approach to stakeholder engagement around content policies and community standards.

I went to this session because there's an obvious link between stakeholder engagement around things like content policies and stakeholder engagement around data governance. Many of the same issues apply in both cases: identifying who stakeholders are, ensuring inclusivity and diversity, and creating long-term, trusted relationships with stakeholder groups.

Observations that I found particularly insightful during the panel included:

  • the fact that there tends to be a lot more discussion about gender and race inclusivity than there is about disability inclusivity
  • that some stakeholders may be affected directly, some indirectly, and some by unintended consequences, so the net needs to be cast quite widely
  • that some stakeholders may require anonymity to be able to engage (for example because identifying themselves as part of a minoritised group might put them in danger)
  • that some stakeholders might not be able to engage because of capacity limitations (eg electricity, internet connectivity as touched on earlier) but others might not be willing to engage because they don’t trust the organisation or the process – and there needs to be proactive outreach to those stakeholders too
  • that building trust with stakeholders requires continuous engagement and long-term relationship building, not just one-off activities
  • that there isn’t any one-size-fits-all policy for a nation-spanning organisation like Meta – different countries may well need different approaches – so the engagement and the outputs need to reflect this diversity
  • that similarly within any one community – such as the disabled community – there are a wide range of different needs and concerns

I asked what Meta does when stakeholders ask for things that are impractical, costly, or that go against Meta's business model or financial interests. The Meta representative talked about trying to build long-term relationships with institutions, organisations and individuals, and being honest that there are going to be limits. He framed it more in terms of not being able to satisfy everyone, because stakeholders generally come with opposing legitimate concerns (some focused on safety, others on voice), than in terms of responding to a consistent, consensual message from stakeholders, which is more what I had in mind.

This did make me think that the “stakeholder engagement” process they were talking about is more of a hub-and-spoke model, with all stakeholders engaging with Meta, rather than stakeholders interacting with each other (eg in a co-design process) and Meta responding to them. It also seemed to be more of a model where the “stakeholders” were advocacy groups, rather than individual Facebook users and non-users, who might be in a better position to compromise.

Putting the “good” in health data as a public good?

Jeni spoke at this session, so there is a separate write-up from it.

Are users the secret weapon to fix social media platforms?

Tim attended this session.

This panel session brought together campaigners involved in mobilising users to exert influence on large social media platforms, including organisers from the Facebook Users Union, Kairos and SumofUs.

I went to this session because I was interested in thinking about the interaction between 'outsider' pressure for platforms to change how they handle data and potential 'insider' structures for participatory governance.

The organisers presenting have adopted a range of strategies, from developing technical Facebook ad-boycotting tools, to building mass movements of hundreds of thousands of users who pledge to log out from platforms for a set period of time. Discussions also pointed to People vs BigTech in the EU, whose 'People's Declaration' calls on existing politicians to provide scrutiny of technology, but errs towards individualised solutions of giving users increased data controls and enhancing transparency measures, backed by increased enforcement powers.

While these large citizen movements may have had a degree of participatory process in the development of their demands, it was notable that the resulting demands don't appear to include calls for embedding participatory governance, nor do they target or reference existing governance developments like the Facebook Oversight Board.

Centering the Global South in developing tools to advance the UNESCO Recommendation on Ethics in AI

Tim attended this session.

This session, hosted by UNESCO, Research ICT Africa and the Data for Development Global Research Hub, introduced the UNESCO Recommendation on the Ethics of AI and upcoming work to develop a Responsible AI Index.

I attended to explore how far a collective governance lens might also be applied in the AI context, as well as to learn more about some of the wider context of our collaboration with Research ICT Africa, and about the Responsible AI Index, which is building on some of the team and learning from the Global Data Barometer.

The session opened with a poll, asking whether participants felt countries in the Global South have an equal share in setting the agenda for AI. There was a resounding 97% answer of 'no', with one participant calling for us to examine the different sorts of power imbalance at play, including political power, financial power, labour power, material power, knowledge production power, social power and moral power.

The presentation of the UNESCO guideline on the ethics of AI, approved last year, described both the process of developing the guideline (consultation through UNESCO regional hubs, plus negotiation with member states) and the structure of the resulting document, built around four values, ten principles and eleven policy proposals, including one related to data policy.

Participation is a theme in the guideline at a number of points, including under the heading of “Multi-stakeholder and adaptive governance and collaboration”:

Participation of different stakeholders throughout the AI system life cycle is necessary for inclusive approaches to AI governance, enabling the benefits to be shared by all, and to contribute to sustainable development. Stakeholders include but are not limited to governments, intergovernmental organizations, the technical community, civil society, researchers and academia, media, education, policy-makers, private sector companies, human rights institutions and equality bodies, anti-discrimination monitoring bodies, and groups for youth and children. The adoption of open standards and interoperability to facilitate collaboration should be in place. Measures should be adopted to take into account shifts in technologies, the emergence of new groups of stakeholders, and to allow for meaningful participation by marginalized groups, communities and individuals and, where relevant, in the case of Indigenous Peoples, respect for the self-governance of their data. (AI Recommendation, §47)

In a breakout session looking at the kinds of programmes or initiatives that should be prioritised to centre the knowledge, skills and experiences of the Global South in global discussions on ethical AI, I briefly raised the importance of collective and participatory models, and of making sure that companies headquartered in the Global North have to consider the voice of users in all the territories or jurisdictions where their products or services may be used, rather than being able to satisfy any policy requirements for participatory engagement by working with people from their home market alone.

I also raised the point, in response to comments on the need to build widespread AI literacy, that we need to explore when literacy really is a requirement, and when we should instead allow citizens or governance institutions to focus on the outcomes they want to see, putting the burden on technologists to establish whether their AI solutions really deliver the kinds of goods that communities want.

One other takeaway from this session was that there may be opportunities to engage with the development of the Responsible AI Index to see if it can track where participatory AI governance is taking place.

Path independent: forging new models of tech infrastructure through community participation

Tim attended this session.

This session brought together a number of organisations researching the social impacts of technology independently of big technology firms, with a particular focus on the need for engineers in these industry-independent projects to have stronger opportunities for networking and mutual exchange.

I attended in part because I've long been inspired by the critical and open approach to research taken by one of the co-organisers, CATLab, and because of the opportunity to start thinking about the role of tech workers in creating change around how data is governed: a theme we've been exploring in our Theory of Change.

I was struck by the diversity of ways panellists and participants were approaching the question of 'new models of tech infrastructure', from CATLab's work on co-creating research studies on social media platforms, to SITU Research's architectural studio approach to designing artefacts from digital traces to inform human rights reporting and action, and the Open Technology Fund's investment in building tech infrastructure and supporting tech development through its 'Labs'. As a non-engineer I found it a little hard to trace the common threads between the presentations, and my sense was that there were lots of different notions of what 'community participation' meant at play. However, in the break-out discussions I gained some useful insights into the challenge that technologists face moving from large firms, which provide a scaffolding for professional development, into the independent technology world, where structures for mentorship and peer-learning are harder to come by.

Discovering and deploying digital public goods as tools for human rights

Jeni attended this session.

This session was about the Digital Public Goods Alliance, which is focused on enhancing the use of and investment in digital public goods – in particular open source software, but also open data and other open assets – in support of the Sustainable Development Goals. The initiative defines a standard for digital public goods and has created a registry of assets that meet that standard. The idea is that this should help organisations working towards the SDGs to identify and use suitable tools, and help investors ensure they’re not repeatedly funding lots of the same kind of thing, and are funding the maintenance of the tools that already exist.

I went to this session in part because of my general interest in open resources and because data is sometimes a digital public good, and if it is, it’ll need governance.

The focus of the conversation was very much on the kinds of useful digital tools that are being developed and have been registered. I was particularly taken by the description of Primero – an open source case management tool for children’s services – and the way they had thought through different tiers of service support for different users. But that’s really because I’m obsessed with business models that provide sustainability for public goods without undermining their openness.

I was a bit disappointed by how little engagement and participation was included within the Digital Public Goods standard, although I was pointed to a currently open issue on incorporating some kind of guidance on governance into the standard. It’s pretty easy to put together a piece of software that doesn’t meet user needs (because you go with your gut about how it should work rather than asking the people you want to use it), that causes unintentional harms (because you never consulted with the people who would be harmed by it, who would have spotted obvious flaws), and that ends up neglected and un-maintained (because you never built any governance around it). It seems to me that proactive engagement with stakeholders can help avoid lots of these bear pits.

Still, if you are creating digital public goods and you want to get funding from international funders, it does look as though it would be worth trying to get yourself on the registry!

Who writes the rules? Centering the excluded majority in tech policy

Jeni attended this session.

This session was about the exclusion of racialised and marginalised women (particularly Black women) from tech policy, building on the Who Writes The Rules open letter which includes:

The #BrusselsSoWhite campaign highlighted that there is a persistent exclusion of racialised people from European decision and policymaking processes – even concerning policies where these groups are most likely to experience harm. More recently, Politico’s analysis on employees within the Commission confirmed the Commission is overwhelmingly male and white. Furthermore, the systemic harms from many social media platforms — censorship, hate speech, disinformation, radicalisation and algorithmic injustice means that racialised and marginalised digital citizens experience two forms of exclusion: 1) from writing the rules of their tech platform experience and, 2) from the involvement in the literal writing of regulatory tech policy rules by states that govern them.

The participants noted that it’s not just policymakers in Brussels who are almost exclusively white, but also leaders of think tanks and funding organisations in this space.

I attended this session because giving marginalised voices a say in data governance is at the heart of our work at Connected by Data. I have become much more aware over the last few years that being anti-racist is not just about not being racist, but about actively advancing racial justice, and promoting and making space for Black voices.

One of the contributions that I found most useful was Temi Lasade-Anderson's exploration of what good engagement might look like, for example around content moderation policies. She advocated taking an intersectional approach and putting primary attention on the experiences of Black women at micro, meso and macro levels:

  • micro level: examining hate speech as particularly experienced by Black women, and solutions that specifically improve their experience (rather than being “race-neutral”)
  • meso level: situating hate speech within the larger experience of social media for Black women, including hyper-visible Black women
  • macro level: understanding how this fits within the wider historical and socio-economic context, how social media exacerbates long-standing inequalities, and the ideologies underlying those tools

Other contributions that I found thought-provoking were:

  • the observation that the EU’s Digital Services Act pays special attention to gender-based inequalities and risks, but not racial ones (as if racism isn’t a problem in Europe) and not intersectional risks
  • consideration of sex workers and sex-positive women as a marginalised group
  • the championing of joyful Black tech, rather than always focusing on negative aspects of technology
  • a challenge to view “participants” in consultations as experts (of their own experience) and equals in the process
  • as in the previous session, a critique of how international human rights law was developed and how it has been applied (lacking an intersectional lens)

The challenge I took most to heart was to look internally, not just externally, in our work. We are currently an organisation of three white people, and it would be easy for us not to acknowledge the racial aspects of our work or, if we do, to work in tokenistic ways rather than in true partnership. We have to not only be reflective and aware, but also (as the panellists discussed) practise an ethics of care.

Power to the people: participatory data stewardship in practice

Jeni and Tim both attended this session.

This session was run by the Ada Lovelace Institute to explore the practicalities of participatory data stewardship, in part building on their report on the topic. We went along because this is really the essence of what Connected by Data is trying to achieve.

The Ada Lovelace Institute structured the session in a really novel and engaging way: we had to imagine that we were members of a climate data cooperative, faced with some requests for access to the data we have been collecting and stewarding, and needing to work out how to respond.

We chose to discuss an access request made by a public/private partnership looking to develop an AI solution for a smart grid. Through the process we explored a range of factors that were important to us in making the decision (such as evidence that a smart grid would reduce energy usage), and aspects of the process that we wanted to improve (such as being able to have a dialogue with the prospective reuser of the cooperative's data in making the decision, and being involved with the project on an ongoing basis).

The session helped to surface the distinction between cases where a co-operative or trust model may be legally bound, or required by its principles, to prioritise its members' interests, and cases where collective data governance calls for consideration of the interests, and representation of the voices, of non-members who are nevertheless affected by a data governance decision.

One takeaway for us more broadly was that it highlighted how much work there still is to do around the detailed practice of day-to-day collective and participatory data governance, but also how quickly we could get to something pretty concrete through this kind of grounded exercise.

The future of empowering data stewardship

Jeni and Tim both attended this session.

This session was run by the Open Data Institute and focused on their work on bottom-up data institutions, particularly questioning what makes these kinds of models “empowering”.

As a small group, we stayed in plenary, and started by sharing personal experiences of moments when we have felt empowered. Stories shared pointed to the importance of both having the opportunity to grow into a role and feeling supported on that journey, and the importance of clear signals that power is really being transferred, such as giving a group control of specific budgets or resources. Put another way, these are times when a role as decision-maker or shaper is presented as a right, not as something that can only be taken up after passing some test, or ‘earning’ it through obtaining some qualification.

As we returned to a focus on data, the session took a philosophical turn, exploring the relational nature of all data and, drawing on discussions of Indigenous Data Sovereignty, the question of 'jurisdiction' in data governance. In other words, before some collective data governance can be applied, there are questions to settle about which institution has a reasonable claim to govern the data: the Indigenous community it is about, a wider polity it covers, or a thematic community working with the data.

As our last session of a long RightsCon, neither of us captured notes good enough to write up a clear takeaway point, but we left again with a sense of the value of the collective lens, and of always asking the question: are those affected by this data involved in its governance?
