It’s been a fairly quiet final week before the holidays, but Monday was eventful, with both a small roundtable with Shoshana Zuboff (author of Surveillance Capitalism) and some good funding news, which as usual I won’t give any details about until the contracts are signed!
The end of Surveillance Capitalism
My understanding of Zuboff’s thesis is that a combination of datafication and capitalist forces has led us to a problematic digital environment. It’s not just about a lack of privacy; it’s also mis- and disinformation, online harms, the attention economy, and the undermining of democratic processes.
In her latest work, she argues that since the root of all these problems is the large-scale collection of behavioural (and other) data, this should be abolished. She wants to see lawmakers make the generation of data from our activities illegal.
I have three critiques of this. First, I think it assumes that all data is the same or used in the same way, when the harms that arise from social media are distinct from those that arise from other uses of data (even behavioural data). The harms that emerge from recommendation engines curating our social media feeds are different from those arising from predictive policing or personalised pricing, and I’m wary of treating them as the same problem.
Second, I think it is entirely unrealistic to imagine that the collection of data by social media companies (or social media itself) can or should be completely abolished. I believe there is a negativity bias in our conversations about social media, probably coupled with a fear of change, that leads us to focus on the harms that Zuboff discusses rather than the benefits. (I’d like someone to do one of those Brief History of People Don’t Want to Work Anymore threads to put some of these fears into perspective.) Yes, I think we as societies need to work our way through the implications of the rise of data, digital services, AI and social media, and how they should be regulated and governed, but I don’t think that means wholesale stopping or reversing.
In addition, there are some aspects of current data collection, most notably data about our networks but also our interests, that we cannot get rid of if we want to continue to use social media. Abolition is not a solution in those cases. And I would argue that there are positive, public-benefit uses of data, including behavioural data, that we should not lose as we aim for a more just, equitable and sustainable future.
Finally, it feels like there are two responses to the problem of the over-collection of behavioural data. One is to call for lawmakers to abolish it. But politicians are swayed by politics – such as a belief in the invisible hand – and by pressure from tech firms, whether that’s the fear of losing jobs and opportunities for their citizens, tax revenues or their reputation as a tech leader, or more direct pressure through lobbying and donations.
The other response is to call for individuals to quit Facebook and other social media platforms over their data practices, which would send market signals about acceptable behaviour and drive change that way. But our lives, friends, communities and often livelihoods are there – we can’t leave, certainly not in the numbers that those companies would care about.
I think the action we need from lawmakers is not to decide what social media companies should and shouldn’t do, but to shift power towards the people who are affected by these technologies and to enable them to build a nuanced societal consensus about how data about us gets collected, used and shared. This week’s episode of The Rest is Politics included a segment on the French Citizens’ Assembly on end-of-life care. Perhaps, given the scale of these companies, a similar (binding) participatory and deliberative exercise should be run on the use of data by social media companies.
I also think, as I mentioned earlier, that there are many, many uses of data – in health, education, work, housing and so on – that aren’t about social media. The impacts of data are highly contextual. We can’t just say “no biometrics”, for example, and expect that to apply in every circumstance (a nice example from the parliamentary roundtable we held was that if you’re sending an engineer off to an isolated location, it is good to monitor them for their health and safety). Making these judgments requires governance that includes the voices of those who are affected.
Just pragmatically, nothing is going to shift on the use of data by social media companies without US lawmakers taking action. But there is still a lot that can be done, now, about other uses of data that arguably have an even greater impact on our lives, particularly the lives of marginalised communities, such as in policing and justice, recruitment, or schooling. For those of us not in a position to influence US lawmaking, I think we should focus on those.