
Zuboff: We Must Protect Our Free Will and Democracy with Law

When talking about tech companies' data collection and profiling, people tend to think: “It does not concern me, I have nothing to hide.” The issue, however, is not about protecting secrets. It is about protecting our free will and democracy. Such are the words of author and professor Shoshana Zuboff, who visited Denmark recently.

According to Zuboff, the problem has evolved over time. First, companies sold predictions about our clickstream, such as where on a webpage to place an advertisement. Then they expanded the business model and started monitoring our mood and emotional state to predict when to expose us to advertisements. The latest development is the manipulation of our physical whereabouts. A game like Pokémon Go nudges and incentivizes us to go to certain places, and with the spread of IoT we will only experience more of this kind of manipulation in the future.

Threat to our free will and democracy
With data on our behavioural patterns and access to our attention – through apps on a phone that is always with us – it is now possible for tech companies to influence not only our clickstream but also our feelings, opinions and physical behaviour. The data tech companies hold about their users can be used in political campaigns to target individuals and sway elections. We saw this in the Cambridge Analytica scandal, where data about Facebook users was used in campaigns for the American presidential election and the Brexit referendum, both in 2016.

Profitable business model
Zuboff explains that selling predictions of human behaviour is a profitable business model. It allows new tech startups to quickly become profitable and attract investors. Companies that do not use this business model are often left behind and struggle to get funding. The market therefore favours the creation and growth of companies whose business model is selling data about their users.

Why Data Privacy is not enough
Even the demand for data privacy and user data ownership that we are experiencing right now will – according to Zuboff – not be enough to protect free will and democracy in the future.
She points out that giving people the right to own and govern their own data does not prevent the business model of selling predictions about human behaviour, since companies are still allowed to sell the data they own about us, whether acquired through collection or bought directly from us.

As long as this business model is legal, we will continue to see the sale of predictions about human futures, with the intention of manipulating individuals and populations, she says.

Law is the solution
The solution is to regulate the market with law, says Zuboff.
As she points out, it was once possible and legal to base a business on the work of slaves or children, but we evolved as a society and decided that it was necessary to regulate the market for the sake of human protection. In the same way:

It is illegal to trade in human organs, and according to Zuboff it should also be illegal to trade in predictions of human futures.

So even with the focus we now have on data ownership, we must not forget the power of legislation as a means of protecting our democracy and the free will of individuals. The law must be updated to keep humans and our democracy protected, not only now but also for future generations.

In the paper Owning Ethics: Corporate Logics, Silicon Valley, and the Institutionalization of Ethics, the authors Jacob Metcalf, Emanuel Moss and Danah Boyd investigate the role and pitfalls of ethics in big corporate firms in Silicon Valley, and how underlying logics in the industry influence how ethics is understood and practised.

According to the authors, ethics in the tech industry rests on the underlying logics of meritocracy, technological solutionism and market fundamentalism. Meritocracy is the belief that power should be given to those with the greatest skills and qualifications. Technological solutionism trusts innovation and technological solutions to solve broad social problems, and market fundamentalism holds that a free and unregulated market can solve social and economic problems.

They argue that many tech workers see ethics as something arising from imperfect products, not as something embedded in a social context that structures social life. For tech people, the apparent solution to ethical problems is therefore technology in the form of improved products, not changes to fundamental structures in the organization or industry. Skilled tech workers have the abilities best suited to solving technological problems, and they therefore perceive themselves as the most qualified to solve ethical challenges in the industry. They use this understanding to dismiss critique from people who do not understand the technical details, such as ethics scholars and government officials.

This places ethics in the practices of tech workers, not in the social world they are building products for, and tech companies seek to materialize ethics in the form of checklists, protocols, evaluation metrics and best practices intended to eliminate risks. Ethics owners have been hired to develop these ethical practices, but they are often limited to suggesting changes that do not negatively affect the company's bottom line, since their advice might otherwise not be followed. Giving the responsibility to knowledgeable tech workers and ethics owners can also allow failures to be blamed on individuals rather than institutions. As the authors point out, a tech worker's understanding of broader social problems can at best be partial, and their power and influence within a company is limited.

The paper states that tech companies involve themselves in ethics to avoid increased external criticism and governmental regulation, but at the same time they are pressured by boards and investors to focus on profit. The lack of regulation might lead some companies to continue unethical practices as long as they are profitable, and small companies are constrained by investors from putting ethics front and centre. For big companies, ethics becomes a way to prevent scandals. As one of the interviewees in the paper put it: “ethics … never makes you money but ethics can save you a lot of money.”

Ethics is thus placed in a context of saving money, avoiding scandals and making the best product, and is seen as something to implement, not something that challenges the way an organization is designed and essentially works. Since ethics is not perceived as a social phenomenon, something that structures society, it risks becoming merely a performance or procedure, not an enactment of responsible values. The authors worry that companies might learn to speak and perform ethics without allowing it to change fundamental structures in the industry:

“If ethics is simply absorbed within the logics of market fundamentalism, meritocracy, and technological solutionism, it is unlikely that the tech sector will be able to offer a meaningful response to the desire for a more just and values-driven tech ecosystem.”

Signe Agerskov is a member of the European Group on Blockchain Ethics (EGBE) and is researching blockchain ethics at the European Blockchain Center.