PlatGovNet 2023: Imagining Sustainable, Trustworthy, and Democratic Platform Governance
This week I had the pleasure of attending the 2023 edition of the annual conference held by the Platform Governance Research Network. Two panels in particular ended with conclusions about a collective approach to governance (of both data and platforms) that have stuck with me.
In the “Justice in Platform Governance” panel, we discussed the limitations of current accountability and oversight initiatives, and how the need for users’ inclusion and participation in platform decision-making seems to be a consensus. But how to push for participatory governance? Big tech companies have created monopolies not only through their political and economic power but also through societal pressure. As someone who does not have an Instagram account, I have learned not to tell people about this choice, hoping to avoid intense follow-up questions and judgement. Choosing a family-owned restaurant over a big chain does not spark a similar reaction. When the only two options citizens have are (1) joining social media or (2) being excluded from certain social interactions and even professional opportunities – my decision to create a Twitter account was based solely on professional and educational reasons – their bargaining power to demand a seat at the governance table is jeopardised. Even the threat of reputational damage seems to be losing its force – how many people actually quit Facebook and Instagram after the Cambridge Analytica scandal? As pointed out by Jef Ausloos (University of Amsterdam) in another panel, transparency rights are on the rise, but we should not rely solely on them, as they often help cement existing power dynamics.
The “Empirical Research” panel discussed the use of data access rights as an opportunity for platform observability: “the right of access plays a pivotal role in enabling other data rights, monitoring compliance and guaranteeing due process.” Jef Ausloos (University of Amsterdam), Pierre Dewitte (KU Leuven) and Cristiana Teixeira Santos (Utrecht University) presented an interesting framework for looking at “data (access) rights” both as “methodology” and as “pedagogy”. The former examines how these rights can be secured as a means of fostering transparency: their features (detailed information, versatility, legal enforceability, free of charge, electronic format), their limitations (security and privacy concerns, data quality, ethical concerns, reproducibility, and non- or pseudo-compliance), and current initiatives that put them to use (such as data donations).
On the other hand, data access rights as “pedagogy” capture the behaviours, methods and skills needed to make use of these rights: engaging with and developing critical awareness of digital infrastructures, and experiencing the law in practice. It is an important lens to help civil society reflect on how to direct its efforts to foster and demand transparency.
According to the researchers’ empirical work, the use of data access rights and the information received from platforms highlighted, amongst other things, the relational aspect of data, as data subjects would receive, along with their own personal information, information concerning other people. The recent report from the French data protection authority – showing that white, middle-aged, well-educated, wealthy men are usually the ones exercising their data protection rights in the country – got me thinking about how, in practice, these individual rights serve only a limited and privileged part of society while jeopardising everyone else’s privacy. Moreover, according to Rohan Grover (University of Southern California), the fact that platforms receive very few data access requests creates a scenario of little scrutiny and few opportunities for companies to improve their processes.
Last week, the already hot topic of AI gained a new chapter when the Future of Life Institute called for a 6-month moratorium on “giant AI experiments” to give society “time to adapt” to the risk concerns. It made me think of the 2019 Black-ish episode entitled “Feminisn’t”, where a white female character says women would not back down from the fight for equality for all women, “not while we’re all living through the worst thing that could ever happen to women in this country”, referring to the Trump administration in the U.S. Tracee Ellis Ross’ character replies, “I mean, these are really bad times for women, but it’s not the worst thing to ever happen in our history”, leaving the other white women confused. She then has to clarify: “slavery, because Black women were slaves”. The longtermist framing of the Future of Life Institute’s call, like the white characters in this episode, seems to overlook the marginalised communities who are already being negatively impacted by the deployment of these technologies.
For its part, the Distributed AI Research Institute (DAIR) issued a response highlighting these already present harms and calling for regulation to address the lack of transparency and the power imbalance that underlie the operations of the companies behind the creation and implementation of AI. Looking at the list of signatories of the Future of Life Institute’s letter (such as Elon Musk), DAIR’s point reminded me of the way some big tech companies responded to the Gender Shades project on facial recognition bias: the first company to announce it would no longer offer facial recognition was, interestingly enough, the one falling behind its competitors in that market. Beyond taking advantage of current regulations to exercise their power, tech companies now also seem to be adopting a “human rights language” to shape narratives and practices regarding technology.
Tim’s blog post, on what it would take for this language to remain humans’ (and not companies’) mother tongue, is worth the read.