Blog. Report from the 40th International Conference of Data Protection and Privacy Commissioners. Big tech, capitalising big on other people’s data, were scolded. Others hit the roof talking about saving democracy and humanity.
Facebook has at least 11 ways of describing a man’s beard. None. Chin strap. Chin curtain. Beard. Full beard. Mustache. Mutton chops. Soul patch. Goatee. Stubble. Balbo. Put together with just as many ways of describing eyes, hair and skin, the human face is just one little puzzle in Facebook’s micro-detailed profiling universe. Hundreds of other details lie in our bodies and minds.
That was clear at the Facebook side event taking place at the 40th International Conference of Data Protection and Privacy Commissioners 2018. This year the theme was ‘Debating Ethics.’ With the side event, Facebook wanted to show how it works with ‘Ethics and Privacy by Design’ in artificial intelligence (AI). The beard game was to find out if humans are better at characterising humans than AI. Of course humans aren’t.
The title of Facebook’s side event was pretty insulting, as the company’s business model prevents it from being ‘private by design’. An important principle in Privacy by Design (PbD) is privacy by default: the default in a service is that you are not being tracked and do not have to opt out of tracking. That default is not part of the Facebook service – which would also be a disaster for Facebook, as most users would then choose never to opt into tracking. However, the side event showed how Facebook – also with AI – continues to collect massive amounts of detail about each and every user in order to target them with personalised messages from those who will pay for it.
Still, Facebook’s CEO Mark Zuckerberg and another data monopoly, Google, took up a lot of space in the official conference programme. Both delivered keynotes over video and had a representative physically on the podium. Both claimed to be dedicated to privacy, and both said they support a much stronger data privacy law in the US. Google’s Senior Vice President on the podium, Kent Walker, said that building ‘free’ services for everyone is in itself ethical, as all income levels have access. A similar argument has often been heard from Zuckerberg.
The claim of supporting stronger data protection laws rings hollow, knowing that both companies have lobbied fiercely against the GDPR, the ePrivacy Regulation (which has not yet been passed), anti-trust regulation and, especially, a federal privacy law in the US. A lot of other companies could have spoken truthfully about data privacy and ethics, such as Danish LEGO, Dutch TomTom, US-based Mozilla, German Cliqz (alternative browser), French Qwant (alternative search) and Swiss Wire (alternative chat) – just to mention a few.
The American Role Model
Fortunately, there was one role model within data protection: US-based Apple. If you disregard Apple’s tax issues with the EU, its black-box tendencies and the fact that it removed security apps from its China store, Apple is a role model in data protection. CEO Tim Cook also arrived in regal style, first at his own cocktail party in the Apple Store, where the specially invited guests queued up to get 3–4 minutes of attention from him, then the next day as a keynote speaker at the public conference:
“We see now how tech can harm rather than help. AI can magnify the worst of humans and undermine our shared sense of what is true and false. This crisis is real. Those of us who believe in tech for good must ask ourselves a fundamental question: What kind of world do we want to live in?”
He bashed the big data companies without mentioning their names: “Data is being weaponised against us with military precision,” he said, and stressed that to Apple, privacy is a fundamental human right.
“The desire to put profit over privacy is nothing new,” he said, and concluded: “Technology must be designed to serve humankind, not the other way round.” (Read here what Apple is actually doing regarding privacy.)
Regulation Is Not Sufficient
It was the first time that digital ethics topped the global agenda as it did in Brussels on October 24th–25th 2018. And as it is a fairly new topic to most people, the conference discussions were abstract, like a UN conference. But it was very clear that the world is now ready for data ethics. A roadmap and an international organisation behind the conference were established, and a ‘Declaration on Ethics and Data Protection in Artificial Intelligence’ was adopted. Further, a majority of the participants supported the establishment of a global treaty on digital ethics.
“Not everything that is legally compliant and technically feasible is morally sustainable,” said Giovanni Buttarelli, European Data Protection Supervisor, in his opening speech. He said that the GDPR only refers three times to ethical considerations in specific professions, such as research. “This is not a criticism of the GDPR. It is a reality check on the limitations of any law, even a comprehensive one. Laws establish the minimum standard.”
He pointed out that we are at a turning point now, that we need a revolution – with possible victims – to develop a positive relationship with new technologies, one which puts people – dignity – at the centre. Do read the whole speech here.
“But now we are also fighting against manipulation of you and me. Privacy is not only about the right to share pictures in public but also the right to decide who you want to share your data with,” said Tim Berners-Lee.
Berners-Lee is behind the US-based start-up Solid, giving individuals a way to control their own data. It is very similar to the movement based in Finland and Estonia, MyData.org, which has even made a declaration on how to work with data with human beings at the centre.
Many harsh words were directed at the abuse of personal data that both some states and some companies have been exercising over the past decade to secure power and profit – everything from profiling, manipulation, scoring, prediction and surveillance (so-called surveillance capitalism) to the ethnic cleansing of minorities, e.g. the Rohingya in Myanmar.
No concrete definition of or guidelines for digital ethics were launched, which might be why our DataEthics Principles and Guidelines were very well received and also used in the Creative Cafe, a workshop on the future of digital ethics.
Future Challenges
The Creative Cafe workshop pointed to a list of important issues we need to consider in the future, e.g.:
- The right to be offline. The disconnected luxury.
- A killswitch – a manual override button – to stop AI running amok.
- Education in critical thinking and ethics for all citizens, but especially employees and children, so they can demand ethics of companies and governments.
- Development and support of new business models where personal data is not an opaque way of paying for the product.
- Development of standards.
- Development of audits to make sure that companies and governments are actually doing what they say – verification by independent third parties.
- Promotion and political support of ethical models and venture capital.
- No ‘ethics washing’ – just saying it without doing it.
Many of the conference talks and discussions can be seen here.