This article was written by guest writer Annesophie Eve Hørlyk and is based on her master’s thesis, published at the IT University of Copenhagen in January 2020.
Informed consent is one of the primary means of protecting internet users’ autonomy, as it allows for informed decision making. Nevertheless, internet users rarely read privacy policies before they consent to terms and conditions, and this weakens the protection the principle offers. When users don’t read privacy policies, they neither become aware of the privacy issues that may result from their digital actions nor are they encouraged to make privacy-enhancing choices. In the following, I outline how behavioral design can help solve this problem.
Today, digital media support many of our everyday tasks. They influence how we interact with each other, how we entertain ourselves, and how we organize our lives. This allows for extensive tracking of our behavior, which is why The Economist proclaimed in 2017 that the world’s most valuable resource is no longer oil, but data. On the one hand, this paves the way for ‘free’ services and better, customized user experiences; on the other hand, fundamental rights such as the right to privacy and to autonomy are severely challenged because users are often given no choice to opt out of the tracking.
One of the primary means of protecting internet users’ autonomy is consent, which provides people with the information necessary to make an informed decision before accepting digital services’ terms and conditions. However, research indicates that most people never read privacy policies and consequently are unaware of the extent of data collection and its consequences.
While privacy policies are mainly formulated to meet legal requirements, they are most often not written and organized in a way that makes them appealing to read. Since this could be part of the reason why internet users refrain from reading privacy policies, I found it relevant to investigate users’ opinions of privacy policies in order to specify how the policies should be formulated to increase usability and accessibility while still meeting the requirements of the European General Data Protection Regulation (GDPR).
Main Findings
Article 12 of the GDPR states that communication relating to data processing should be provided “in a concise, transparent, intelligible and easily accessible form, using clear and plain language”, which – in my opinion – has not been the case in any of the examples I have read. Consistent with this, the interviewees said that they don’t read privacy policies (even though they were keen on protecting their data) because the policies are too long, too unstructured, and too hard to understand.
At first glance, when the participants were asked to scroll through Google’s privacy policies, they made positive statements about how the information was presented. However, when they were asked to find information on the purpose of Google’s data collection and on whom personal data are shared with, most of them struggled. As it turned out, the relevant information was spread across several places in the text, and they had to click through many links to piece together a comprehensive answer, which left them with a very different impression of Google’s privacy policies.
Overall, the user research revealed that for privacy policies to be more user friendly, the information should be condensed, properly categorized, and should appeal to user heuristics that allow for quick decision making. Furthermore, the focus groups demonstrated that not all of the legally required information in privacy policies is equally important to users, indicating that the amount of information could be reduced in users’ initial encounter with a policy. On this basis, I redesigned Google’s privacy policies in alignment with the GDPR and user preferences, and proposed a new standard, or concept, for the presentation of privacy policies.
The Concept
The concept is essentially a short version of Google’s privacy policies. Since the short version lacks certain legally required details, it is suggested that companies offer both a “normal” and a short version of their privacy policies. The short version sorts privacy-related information from the normal version into predefined categories, visualized by icons (see picture below). Each category only summarizes the required information without further explanation, which can instead be reached in the normal version through a link. Moreover, rather than requiring users to actively seek out privacy policies before consenting, as is often the practice today, the short version should automatically pop up when users click on service providers’ calls to action that imply consent. Thereby, users should at least get a quick impression of the consequences of their consent, especially thanks to the icons that make the information easy to decode.
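To make the idea concrete, here is a minimal sketch of how the short version could be modeled and attached to a call to action. All names, categories, and icon paths are hypothetical illustrations for this article; they are not Google’s actual policy categories or the thesis prototype.

```typescript
// Hypothetical data model for the short-version policy.
interface PolicyCategory {
  id: string;        // e.g. "data-collected", "purpose", "sharing"
  icon: string;      // path to the icon visualizing the category
  summary: string;   // condensed version of the required information
  detailUrl: string; // deep link into the full ("normal") policy
}

interface ShortPolicy {
  provider: string;
  categories: PolicyCategory[];
}

// Attach the short policy to a call-to-action button, so that seeing
// the categorized summary and consenting happen in one interaction.
function attachShortPolicy(cta: HTMLButtonElement, policy: ShortPolicy): void {
  cta.addEventListener("click", () => {
    const box = document.createElement("dialog");
    box.innerHTML =
      `<h2>${policy.provider}: what your consent means</h2>` +
      policy.categories
        .map(c => `<p><img src="${c.icon}" alt="${c.id}" width="24"> ` +
                  `${c.summary} <a href="${c.detailUrl}">More</a></p>`)
        .join("");
    document.body.appendChild(box);
    box.showModal(); // the short version pops up automatically
  });
}
```

The design point is that the summary is not something the user must seek out: the call to action itself triggers the categorized overview, so consent and awareness share the same click.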
Making Privacy Information Accessible Is Only the First Step
Accessible information is not enough to ensure users’ autonomy in an online context, and it does not guarantee a sufficient level of transparency. Starting with the latter: not all users are acquainted with the terms used in privacy policies, and they may not be inclined to look up their meaning. Even for people who understand the terms, it is impossible to predict the long-term implications of consent, because the technologies used for data processing are in constant development and may quickly enable completely new ways of using data. Arguably, few people could have foreseen that Cambridge Analytica would have such a great impact on the American election in 2016. Something similar applies to the Russian search engine Yandex, which lets you upload a photo of a person to search for that person’s name and other personal information. Yandex then crawls through public profiles and articles it has indexed that are associated with the face in the photo. Who would have thought that possible a few years ago, when they shared their photos and personal information online?
Moreover, service providers may not adequately describe their data processing practices in their privacy policies, whether to protect trade secrets, to manage user perceptions, or simply because they do not fully understand the consequences of their own practices. So even if privacy policies are condensed and well ordered, they will most often still lack transparency, and they will not solve the privacy problem on their own.
For instance, a recent study (“Out of Control”) from the Norwegian Consumer Council described how Google, to some extent, works against Android users’ attempts to protect themselves from tracking. This happens through the Google Play Android Advertising ID, a unique serial number assigned to each Android device that allows apps and ad networks to anonymously harvest data about a user without requiring special permissions. Apps and ad networks can use it to profile and track users across different devices and services.
The advertising ID can be reset, generating a new serial number, presumably to let people think they can limit the tracking. But if the advertising ID is transmitted together with static data (e.g. IP addresses, the SSAID, the IMEI), which is often the case, third parties can combine the static data with the new advertising ID and keep tracking the user. For that reason, Google’s terms of use for advertisers state that “the advertising ID must not be connected to personally-identifiable information or associated with any persistent device identifier without explicit consent of the user”. However, as the Out of Control study demonstrates, a large number of third parties collect data that can be used to track users who have reset their advertising IDs, without obtaining special consent.
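To illustrate the mechanism, here is a minimal sketch of how a third party could bridge a reset advertising ID using static identifiers. The record shape and the join key are assumptions made for this example; the actual data flows documented in the Out of Control study are more complex.

```typescript
// Illustrative sketch of why resetting the advertising ID alone fails.
interface AdRequest {
  advertisingId: string; // resettable by the user
  ip: string;            // static identifiers often sent alongside it
  ssaid: string;         // Android SSAID (Settings.Secure.ANDROID_ID)
}

// A third party keyed by static data can bridge old and new ad IDs.
const profiles = new Map<string, string[]>(); // static key -> ad IDs seen

function ingest(req: AdRequest): void {
  const staticKey = `${req.ip}|${req.ssaid}`;
  const ids = profiles.get(staticKey) ?? [];
  if (!ids.includes(req.advertisingId)) ids.push(req.advertisingId);
  profiles.set(staticKey, ids);
  // If ids.length > 1, the user has reset their advertising ID,
  // but the static key links the old profile to the new one.
}

// Before and after a reset, the static key stays the same:
ingest({ advertisingId: "ad-id-old", ip: "203.0.113.7", ssaid: "a1b2c3" });
ingest({ advertisingId: "ad-id-new", ip: "203.0.113.7", ssaid: "a1b2c3" });
// profiles now maps "203.0.113.7|a1b2c3" -> ["ad-id-old", "ad-id-new"]
```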
This exemplifies some of the big issues with online privacy: in many cases it is impossible to opt out of the tracking, and it is almost impossible to comprehend the consequences of one’s consent due to the lack of transparency. This creates an information asymmetry between companies and users, and many users may disregard privacy-enhancing choices because they feel it makes no difference anyway.
The concept in this study represents only a tiny fraction of what needs to be done to strengthen informed consent. It serves as an example of how privacy policies could be made more accessible and appealing in order to raise users’ awareness of data usage and the extent of online data collection. However, given the lack of transparency and the fact that some internet users will choose digital services over privacy considerations, users should be granted more control over their data if their privacy and autonomy are to be respected. This calls for global settings at the device level, allowing for default opt-out solutions in which apps and services have to ask permission each time they want access to specific categories of user data. That way, informed consent would not be limited to the fragile process of bombarding users with privacy policies each time they wish to use a digital service.
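As a rough illustration of that idea, here is a minimal sketch of a device-level permission model in which every data category defaults to opt-out and apps must ask per category. The category names and the API are hypothetical; no existing mobile platform exposes exactly this interface.

```typescript
// Hypothetical device-level privacy settings: opt-out by default.
type DataCategory = "location" | "contacts" | "adTracking" | "usageStats";

const globalSettings = new Map<DataCategory, boolean>([
  ["location", false],
  ["contacts", false],
  ["adTracking", false],
  ["usageStats", false], // nothing is shared unless the user opts in
]);

// An app must ask for each category it wants; if the user has not
// opted in globally, the device prompts them for this specific request.
async function requestAccess(
  app: string,
  category: DataCategory,
  askUser: (question: string) => Promise<boolean>,
): Promise<boolean> {
  if (globalSettings.get(category)) return true; // user opted in globally
  return askUser(`${app} requests access to ${category}. Allow?`);
}
```

Under such a model, consent is attached to concrete, per-category requests at the moment of access, rather than to a single blanket policy accepted up front.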
Informeret samtykke online – speciale (“Informed consent online” – master’s thesis, in Danish)
Annesophie Eve Hørlyk recently graduated with her cand.it degree in Digital Design and Communication from the IT University of Copenhagen. She has a background in the field of communications and has worked as a web editor, writing content, doing web design, and optimizing user experiences. During her studies, she specialized in UX and usability, but she has worked with a broad range of design tools and subjects, e.g. wireframing, prototyping, service design, and interaction design.