Three members of the Connected by Data team (Maria, Helena and Tim) attended the Data Justice conference in Cardiff on 19th and 20th June. On the 20th we ran a workshop to trial our game-based activity for exploring data governance, and across the rest of the conference we attended sessions, capturing notes on a number of them, written up below.
Keynote: Experiences at the margins
With Mirca Madianou (Goldsmiths, University of London), Patrick Williams (Manchester Metropolitan University), and Hamid Khan (StopLAPDSpying Coalition).
Maria, Tim and Helena all attended this session.
Mirca Madianou presented a framework for discussing the power asymmetries behind uses of technology that can lead to “technology for good” becoming harmful: techno-colonialism. The framework acknowledges how science has been a prime tool for moulding, controlling and managing colonial subjects, arguing that the subjugation of the colonised continues after postcolonial independence through the dominance of eurocentric knowledge, the codification of racial and social discrimination, and the pervasiveness of global capitalism.
Mirca outlined how the UNHCR’s increasing use of biometric registration provides a case study in data and digital policies exacerbating inequalities. The United Nations improperly collected and shared personal information from ethnic Rohingya refugees with Bangladesh, which shared it with Myanmar to verify people for possible repatriation. Some Rohingya refugees weren’t aware they could opt out of providing biometric data and had no idea their data was being shared further; one reportedly said they thought getting an iris scan was a test for eye disease. Rohingya refugees later protested and went on strike against smart ID cards issued in Bangladesh camps, prompting a forceful response from the Bangladeshi government.
According to Madianou, the use of technology in this scenario is backed by narratives that justify data processing through the logics of accountability (framed as improvement), of audit, and of securitisation.
In his presentation, Patrick Williams went deeper into the narratives created for social control: “the criminalised ‘other’ as an object of social control”. Drawing on testimonies and people’s experiences, he pointed to dangerous associations (such as membership of a “gang” as a primary signifier, and racialised notions of criminalisation), and to how data processing has been used to reinforce these patterns through the “seduction of technology”, as science and technology have assumed within society a certain authority based on claims of objectivity.
The extended quote below highlights this point:
“When you’re stopped when you’re not doing anything suspicious, and you’re stopped without the reason even being explained, that creates a really bad feeling. And you feel a lot of fear inside. You ask yourself, ‘Is there something wrong? Has someone said something about me? What is happening, exactly?’ A few hours afterwards, you’re still thinking about it, especially if they ask you for your name. You think, ‘Well, what are they doing with my name? They think I did something? Why did they ask for my name, specifically? Am I going to be stopped somewhere else after that?’” - Ashraf Hamalawi, a university lecturer who also serves as a police special constable in Worcestershire, and who has been stopped and searched more than 12 times, mainly under the Terrorism Act, while visiting London.
Williams concluded with a call not to let ourselves be seduced: technology is not neutral, and conversations around it should go beyond human rights, in light of the centrality of constructions of riskiness and the protection of an imagined white European.
Hamid Khan started his talk by addressing the intuitive association between privacy and surveillance. He argued that privacy concerns are not the right frame for opposing surveillance: the latter needs to be understood as a methodology for stalking people with intent to cause harm.
Echoing the other speakers’ concerns, Khan argued that data is political: historically, there has been an intention and purpose in categorising and documenting the acts of certain communities as “criminal”. In light of that, he suggested some guiding principles to move the conversation forward: (1) understand injustices in data processing not as a moment in time but as a continuation of history; (2) recognise the creation of the “other”; (3) de-sensationalise the rhetoric of national security; and (4) anchor the work in human rights, aiming to dismantle the system rather than keeping the conversation limited to constitutional or civil rights and civil liberties. By recognising that these discussions go beyond a specific moment in time, he framed “data abolition” as a multi-generational journey, reminding us that “where there is oppression, we have to build a culture of resistance”.
Papers: Governing publics with data
Maria attended this session.
This session revolved around data processing for public service purposes within the digital welfare state. By targeting specific publics in pursuit of effective and focused public policies, these initiatives more often than not put those groups and communities at risk of being negatively impacted by data governance decision making. Some key examples emerging from the papers included:
- In an effort to unlock the value of municipal data, Denmark started to collect citizens’ data in the 1920s (marital status, gender, occupation, religious beliefs), but continues to face the critical challenge of “using data to create a reality they can control rather than understanding the reality they are in”.
- Colombia’s System of Possible Beneficiaries of Social Programs (Sisbén) is a targeting tool composed of surveys and the cross-referencing of administrative data. It is “exclusionary by design”: the scoring procedure is difficult to access, there is a narrative of it being highly technical, and there were no participatory methods in its creation or in defining what constitutes “poor” in the context of Colombian society. The case poses the question of “who is in charge of correcting mistakes or inconsistencies?” and challenges the assumption that “the more data the better”.
- The temporal governance involved in data processing for social security policies negatively impacts beneficiaries subject to automated credit scoring, imposing a temporality on these individuals’ financial existence within their community.
Workshop: From the personal to the collective: Bringing context to reimagine impact assessments
Tim attended this workshop session, run by Jacklyn Sawyer of Columbia University and Gulsen Guler of the Civic Software Foundation, exploring how we might expand the meaning of impact in technology impact assessments, and better bring in expertise from different domains to create contextually aware impact assessment processes.
It started from a recognition that legislation and regulation are increasingly calling for independent tech impact assessments to be carried out, but these are often free-form, without clear methodologies yet established in a burgeoning field.
Jacklyn and Gulsen presented the group with an example impact assessment template, developed from a synthesis of a number of common frameworks, that quickly revealed the limitations of pro-forma approaches that lack an appreciation of the different contexts that technologies might be designed and deployed within.
Through an interactive discussion, the workshop explored questions around the events that should trigger technology impact assessments, and the need for models that can capture unintended, indirect or delayed consequences of technology adoption.
Papers: Data and AI Governance
Maria attended this session.
This session challenged notions of data governance, particularly those focused on state/regulators/top-down models of governance, in an effort to seek (new) models that could address the impact of data and AI on society.
Linnet Taylor used examples of bottom-up norm entrepreneurship, such as abolitionism, feminism and the anti-apartheid movement, to suggest that civil society could set the norms for governing technology in preference to norms set by state governments. Drawing on the idea of “nothing about us without us”, Linnet described the work of the Global Data Justice project in calling for new global norms that centre the voices of affected communities.
Stefan Baack suggested an analogy between “alternative (social) media” and “alternative data governance”. The former refers to media that challenge established media power by democratising the means of media production, while the latter seeks to empower data subjects in various ways and to make data serve individual and collective interests.
Karine Gentelet and Sandrine Lambert shared some insights on the role of civil society in global discussions of AI regulation. A transnational context creates tensions within the civil society ecosystem, fuelled by language barriers and limited time and resources, leading to decisions about whether to leave some advocacy spaces (e.g. the Council of Europe) in favour of those offering more space for action, and whether to push for ongoing participation or occasional consultation.
Lastly, Jef Ausloos explored how the GDPR could be used for collective empowerment. Against the backdrop of the law’s schizophrenic role vis-à-vis the transparency of digital infrastructures, he argued that the GDPR is not about privacy, but rather about the structures that exploit data. Data rights, lodging complaints with authorities, liability regimes for redress, and expanding the list of actors legally authorised to petition could therefore be useful mechanisms.
Papers: Datafied Health
Maria attended this session.
The session discussed the use of citizens’ data to inform health policies, and how such initiatives demand contextualisation and consideration of communities’ values, as well as of social and economic specificities. What does it mean to be healthy in a given community? What does it mean to be wealthy (as a socioeconomic category) within a specific group of people? The answers to these questions will certainly change from one context to another, calling for context-specific policy designs and posing different challenges to consent, data literacy, autonomy and sovereignty.
Papers: Data stewardship
Helena attended this session, which explored the relationship between theoretical frameworks and implementation in the case of data stewardship and data commons. The following notes are summarised from bullet points using ChatGPT.
The question of who determines the political agenda of data commons was raised. It was noted that data stewardship has different definitions and levels of participation, leading to a suggestion of creating an “Arnstein’s ladder” for data stewardship.
It was noted that UK government policy currently appears more focused on supporting development of AI systems, as opposed to empowering individuals and communities. The individual level of data stewardship was noted as commonly neglected, while potential harms were identified as more likely to occur at the group level. The concept of “epistemicide” was introduced, referring to the construction of a world based on a single knowledge model, excluding other perspectives.
There was a discussion of different ways of evaluating data stewardship approaches. The example of the Liverpool Data Cooperative was discussed, which focused on health and wellbeing and was funded by the mayor and hosted by a university. It was noted that the prevailing view of data as “oil” leads to an extractive model in which good intentions are overshadowed by commercial interests, as with mental health apps. The cooperative worked with organisations like Involve and Ada to co-design its governance, referencing the Ada Spectrum of Participation.
The DATALOG project in Barcelona, an NGO funded by the city, aimed to make city data useful and interoperable. Engaging communities was identified as challenging due to issues of interest and trust: while people are motivated by individualistic cost reduction, collective goods, such as addressing the impact of climate change, are less motivating.
The EU digital wallet was seen as an opportunity for data steward intervention. It would contain standard information about residents, utilising distributed ledger technology. Concerns were raised regarding the definition of a digital citizen and who the wallet caters to. Linguistic analysis revealed that control is associated with individual self-agency, while convenience is associated with interoperability across the EU. Fragmentation and differences within the EU were framed as negative, and trust was primarily framed in relation to corporations’ ability to reliably identify individuals and their residency status.