
CPDP 2017: Ethics in the Age of Intelligent Machines

Analysis. "Laws are not enough. We need ethically driven innovation in the Age of Intelligent Machines" was one of the key conclusions of CPDP. Below is first a summary of DataEthics' intervention at one of the panels (Ethics, Observational Platforms, Mathematics and Fundamental Rights), followed by some highlights of the ethics discussions from the conference.

A few years ago a social media company decided to run experiments on its users. By filling their news feeds with positive or negative stories, it measured hundreds of thousands of people's emotional reactions. When the story surfaced there was, of course, a public outcry, and the company found itself in a situation where it had to show that it cared. So a spokesperson apologized in public. However, she didn't apologize because what they had done was ethically questionable; they were merely sorry that it had been "poorly communicated".

Legally, you can do a lot with data right now, and a lot is done with data that is not necessarily in the best interest of the individual. And this is the point where we revisit ethics: when the laws, social awareness and formal systems in place are not enough.

A recurrent theme of this year's Computers, Privacy and Data Protection conference (CPDP) in Brussels concerned the ethics of an Age of Intelligent Machines driven by invisible data processing, where people have no insight into the interests behind their data's use, no knowledge of how it is used, and no sense of the consequences it has for them as individuals. In this IEEE initiative, the core challenge for society is described as a growing data asymmetry and information power imbalance. The American professor Frank Pasquale refers to it as the Black Box Society, governed by secret algorithms.

Whatever name it goes by, it is a development that calls for an ethos centred on the concept of individual agency: the individual's ability to think and act in ways that shape their own life trajectory.

Experiments like Facebook's happen because we currently don't have the social and legal tools to manage the core risks of what some refer to as the Fourth Industrial Revolution. But we are getting there.

Just as we got there with the risks of earlier industrial periods. Today, when a car company is caught manipulating data so that its cars pollute more than they are allowed to, the company is immediately forced to recall hundreds of thousands of cars, €15bn is wiped off its stock market value in the blink of an eye, the CEO publicly apologizes, and governments around the world call for action. This happens not just because there is a legal requirement, but because there is a social demand.

One day (soon) we will have the tools to measure the impact of an unethical data exchange on an individual's agency, just as we have sophisticated tools to measure the pollution of a car. And we will have the tools, legal and technical, to react to it. We will have audit mechanisms to investigate the ethics of algorithms, ethical design standards, and ethical impact assessments to mitigate the risks of data practices; users will have their own tools to test a service and will know what to look for when choosing a service to trust. (And that will also be the day when we call it practice and law and stop referring to it as ethics.)

Some services will provide us with a kind of luxury ethics/privacy that we might have to pay for. And some services will still excel in unethical practices. But there will be the social awareness and the tools to respond to this.

Fortunately, there is an emergence of businesses that see the competitive advantage (Hasselbalch, Tranberg, 2016) in embedding data-ethical strategies in their practices, and that are positioning themselves accordingly. They go beyond mere legal compliance and try to foresee and mitigate the risks of data processing within an environment that is generally driven by the collection and use of data. They embed user control and privacy in their services, in their management structures and human resources departments, and use it as part of their general brand and marketing development. They put human agency, so to speak, back into the equation.

Highlights from the CPDP Ethics Discussions

"Laws are not enough. We need ethically driven innovation in the Age of Intelligent Machines" was one of the key conclusions of CPDP. But how do we incorporate ethics in the design of services? What systems and tools need to be in place to create ethical accountability and transparency? Here are some of the discussions and conclusions:

Dataethics.eu, Apple, Philips and Professor Peter Burgess of the EDPS Digital Ethics Board debate the content of data ethics in a panel on Ethics, Observational Platforms, Mathematics and Fundamental Rights organised by The Information Accountability Foundation.

Professor Peter Burgess, EDPS Digital Ethics Board Chair, describes digital ethics as one embedded in cultural and social settings. Data will always be biased, he says, and so will algorithms. The goal of ethics is to understand this bias and to discern how much of it is acceptable.

Jane Horvath, Senior Director of Global Privacy at Apple, describes how Apple builds privacy and data ethics into its products. "We believe that data ethics is an intrinsic part of privacy and is fundamentally about what is right and wrong," she says. "Personal tech must also be the most private… your data is yours." Apple works with privacy the way it does not because it is legally required: "we do it because it is the right thing to do."

Martin Abrams from the Information Accountability Foundation presents an ethical framework for big data use.

Dataethics.eu believes that an ethical business evolution is actually possible, but that it needs a value-based system to evolve in.

The theme of the CPDP and the main challenge of AI: How do we govern and ensure accountability in AI systems?

How do we audit and achieve transparency in deep learning algorithms that are so mathematically complex that not even their designers understand how they make decisions, asks EPIC's Marc Rotenberg.

Does the GDPR even contain a right to explanation? The Alan Turing Institute has looked into the legal text and says no, it doesn't; it only provides a right to be informed, says a researcher from the Institute in this debate on AI and Ethics (generally worthwhile watching in full length).

Professor Luciano Floridi from the Alan Turing Institute concludes that we need an AI watchdog.

Is the GDPR's provision on privacy by design (data protection by default) even possible in a machine learning system that evolves by the accumulation of data, asks the UN's Privacy Rapporteur Joe Cannataci.
The European Data Protection Supervisor's Digital Ethics Board is working on two reports on the topic. One will be published soon.

Several ethics debates with the Digital Ethics Board took place at CPDP.

Last but not least, "data ethics" became the buzzword of the CPDP.

See all the videos from CPDP 2017