The way data is collected and used does not work for people or our communities
Data is constantly collected as we go about our lives and work
The challenges to humanity posed by the digital future, the first detailed examination of the unprecedented form of power called “surveillance capitalism,” and the quest by powerful corporations to predict and control our behavior.
In this masterwork of original thinking and research, Shoshana Zuboff provides startling insights into the phenomenon that she has named surveillance capitalism. The stakes could not be higher: a global architecture of behavior modification threatens human nature in the 21st century just as industrial capitalism disfigured the natural world in the 20th.

Zuboff vividly brings to life the consequences as surveillance capitalism advances from Silicon Valley into every economic sector. Vast wealth and power are accumulated in ominous new “behavioral futures markets”, where predictions about our behavior are bought and sold, and the production of goods and services is subordinated to a new “means of behavioral modification”. The threat has shifted from a totalitarian Big Brother state to a ubiquitous digital architecture: a “Big Other” operating in the interests of surveillance capital. Here is the crucible of an unprecedented form of power marked by extreme concentrations of knowledge and free from democratic oversight.

Zuboff’s comprehensive and moving analysis lays bare the threats to 21st-century society: a controlled “hive” of total connection that seduces with promises of total certainty for maximum profit - at the expense of democracy, freedom, and our human future. With little resistance from law or society, surveillance capitalism is on the verge of dominating the social order and shaping the digital future - if we let it.
From the Fourth Amendment to George Orwell’s Nineteen Eighty-Four, our culture is full of warnings about state scrutiny of our lives. These warnings are commonplace, but they are rarely very specific. Other than the vague threat of an Orwellian dystopia, as a society we don’t really know why surveillance is bad, and why we should be wary of it. To the extent the answer has something to do with “privacy,” we lack an understanding of what “privacy” means in this context, and why it matters. Developments in government and corporate practices have made this problem more urgent. Although we have laws that protect us against government surveillance, secret government programs cannot be challenged until they are discovered. And even when they are, courts frequently dismiss challenges to such programs for lack of standing, under the theory that mere surveillance creates no tangible harms, as the Supreme Court did recently in the case of Clapper v. Amnesty International. We need a better account of the dangers of surveillance.
This article offers such an account. Drawing on law, history, literature, and the work of scholars in the emerging interdisciplinary field of “surveillance studies,” I explain what those harms are and why they matter. At the level of theory, I explain when surveillance is particularly dangerous, and when it is not. Surveillance is harmful because it can chill the exercise of our civil liberties, especially our intellectual privacy. It also gives the watcher power over the watched, creating the risk of a variety of other harms, such as discrimination, coercion, and the threat of selective enforcement, where critics of the government can be prosecuted or blackmailed for wrongdoing unrelated to the purpose of the surveillance.
At a practical level, I propose a set of four principles that should guide the future development of surveillance law, allowing for a more appropriate balance between the costs and benefits of government surveillance. First, we must recognize that surveillance transcends the public-private divide. Even if we are ultimately more concerned with government surveillance, any solution must grapple with the complex relationships between government and corporate watchers. Second, we must recognize that secret surveillance is illegitimate, and prohibit the creation of any domestic surveillance programs whose existence is secret. Third, we should recognize that total surveillance is illegitimate and reject the idea that it is acceptable for the government to record all Internet activity without authorization. Fourth, we must recognize that surveillance is harmful. Surveillance menaces intellectual privacy and increases the risk of blackmail, coercion, and discrimination; accordingly, we must recognize surveillance as a harm in constitutional standing doctrine.
74% (±4%) of internet users in the US find targeted online ads invasive
Washington Post - Schar School
Results from a poll conducted Nov. 4-22, 2021, among a random national sample of US households, resulting in 1,122 adults including 1,058 internet users.
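As context for the margins of error quoted with these poll figures, here is a minimal sketch of the textbook calculation for a simple random sample, using Python purely for illustration. The published ±4% is wider than this naive figure because media polls typically account for weighting and survey design effects; only the 74% result and the 1,058-respondent sample size come from the poll summary above.

```python
from math import sqrt

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Half-width of a 95% confidence interval for a sample proportion,
    assuming a simple random sample (z = 1.96 for 95% confidence)."""
    return z * sqrt(p * (1 - p) / n)

# 74% of the 1,058 internet users polled found targeted online ads invasive
print(f"{margin_of_error(0.74, 1058):.1%}")  # ~2.6% under the simple-random-sample assumption
```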
Data determines prices, the opportunities we are given and the decisions made about us
It widens the gaps between the privileged and the marginalised
A former Wall Street quant sounds an alarm on the mathematical models that pervade modern life - and threaten to rip apart our social fabric
We live in the age of the algorithm. Increasingly the decisions that affect our lives - where we go to school, whether we get a car loan, how much we pay for health insurance - are being made not by humans but by mathematical models. In theory this should lead to greater fairness: Everyone is judged according to the same rules, and bias is eliminated.
But as Cathy O’Neil reveals in this urgent and necessary book, the opposite is true. The models being used today are opaque, unregulated, and uncontestable even when they’re wrong. Most troublingly, they reinforce discrimination: If a poor student can’t get a loan because a lending model deems him too risky (by virtue of his zip code), he’s then cut off from the kind of education that could pull him out of poverty, and a vicious spiral ensues. Models are propping up the lucky and punishing the downtrodden, creating a “toxic cocktail for democracy”. Welcome to the dark side of big data.
Tracing the arc of a person’s life, O’Neil exposes the black-box models that shape our future, both as individuals and as a society. These “weapons of math destruction” score teachers and students, sort résumés, grant (or deny) loans, evaluate workers, target voters, set paroles, and monitor our health.
O’Neil calls on modelers to take more responsibility for their algorithms and on policy makers to regulate their use. But in the end, it’s up to us to become savvier about the models that govern our lives. This important book empowers us to ask the tough questions, uncover the truth, and demand change.
A new way of thinking about data science and data ethics that is informed by the ideas of intersectional feminism.
Today, data science is a form of power. It has been used to expose injustice, improve health outcomes, and topple governments. But it has also been used to discriminate, police, and surveil. This potential for good, on the one hand, and harm, on the other, makes it essential to ask: Data science by whom? Data science for whom? Data science with whose interests in mind? The narratives around big data and data science are overwhelmingly white, male, and techno-heroic. In Data Feminism, Catherine D’Ignazio and Lauren Klein present a new way of thinking about data science and data ethics—one that is informed by intersectional feminist thought.
Illustrating data feminism in action, D’Ignazio and Klein show how challenges to the male/female binary can help challenge other hierarchical (and empirically wrong) classification systems. They explain how, for example, an understanding of emotion can expand our ideas about effective data visualization, and how the concept of invisible labor can expose the significant human efforts required by our automated systems. And they show why the data never, ever “speak for themselves.”
Data Feminism offers strategies for data scientists seeking to learn how feminism can help them work toward justice, and for feminists who want to focus their efforts on the growing field of data science. But Data Feminism is about much more than gender. It is about power, about who has it and who doesn’t, and about how those differentials of power can be challenged and changed.
The power of big tech has a distorting effect on our economies
An independent report on the state of competition in digital markets, with proposals to boost competition and innovation for the benefit of consumers and businesses.
This is the final report of the Digital Competition Expert Panel. Appointed by the Chancellor in 2018, and chaired by former Chief Economist to President Obama, Professor Jason Furman, the Panel makes recommendations for changes to the UK’s competition framework that are needed to face the economic challenges posed by digital markets, in the UK and internationally. Their report recommends updating the rules governing merger and antitrust enforcement, as well as proposing a bold set of pro-competition measures to open up digital markets.
Social media and targeted advertising have turned democratic elections into digital battlegrounds
OxTEC: Ready to Vote: Elections, Technology and Political Campaigning in the United Kingdom
Oxford Technology & Elections Commission
OxTEC’s final report, by Phil Howard and Lisa-Maria Neudert, sets out a series of recommendations aimed at securing the information infrastructure of elections and creating a trusted environment for the democratic use of technology. The report highlights areas for immediate action for policymakers, political parties, industry and civil society and also sets out short-term and long-term recommendations.
OxTEC was formed by an alliance of stakeholders, including ComProp, to “explore how democracies can integrate democratic norms and practices into the use of information technologies, social media, and big data during campaigns, with the goal of protecting the integrity of elections.”
56% (±3.5%) of people in the US think Facebook has a negative impact on society
Washington Post - Schar School
Results from a poll conducted Nov. 4-22, 2021, among a random national sample of US households, resulting in 1,122 adults including 1,058 internet users.
The environment is damaged by the collection and use of data
This paper addresses a problem that has so far been neglected by scholars investigating the ethics of Big Data and by policy makers: the ethical implications of Big Data initiatives’ environmental impact. Building on literature in environmental studies, cultural studies and Science and Technology Studies, the article draws attention to the physical presence of data, the material configuration of digital services, and the space occupied by data. It then explains how this material and situated character of data raises questions concerning the ethics of the increasingly fashionable Big Data discourses. It argues that attention should be paid to (1) the vocabulary currently used when discussing the governance of data initiatives; (2) the internal tension between current data initiatives and environmental policies; (3) issues of fair distribution. The article explains how taking these aspects into account would allow for more responsible behaviour in the context of data storage and production.
The growth of data publicly available on the internet has been a boon for biological science and conservation. But it is also being used by poachers and dishonest collectors to locate rare plants and animals and sell them illegally for a hefty price.
And we miss out on the public benefits of data
We could improve medical treatments
We could reduce energy usage
We could improve transport infrastructure
Everyone should decide how their digital data are used — not just tech companies
Jathan Sadowski, Salomé Viljoen and Meredith Whittaker
In our view, the current model, in which the digital traces of our lives are monopolized by corporations, threatens the ability of society to produce the rigorous, independent research needed to tackle pressing issues. It also restricts what information can be accessed and the questions that can be asked. This limits progress in understanding complex phenomena, from how vaccine coverage alters behaviour to how algorithms influence the spread of misinformation.
We do not agree with the way services use data – we are simply resigned to it
73% (±3.5%) of people in the US think the way companies collect information to target ads is an unjustified use of people's private information
Washington Post - Schar School
Results from a poll conducted Nov. 4-22, 2021, among a random national sample of US households, resulting in 1,122 adults including 1,058 internet users.
The aim of this article is to propose a theoretical framework for studying digital resignation, the condition produced when people desire to control the information digital entities have about them but feel unable to do so. We build on the growing body of research that identifies feelings of futility regarding companies’ respect for consumer privacy by suggesting a link between these feelings and the activities of the very companies that benefit from them. We conceptualize digital resignation as a rational response to consumer surveillance. We further argue that routine corporate practices encourage this sense of helplessness. Illuminating the dynamics of this sociopolitical phenomenon creates a template for addressing important questions about the forces that shape uneven power relationships between companies and publics in the digital age.