To better understand user preferences, we have to understand the effect of features in the choice architecture. This article provides an introduction to why we prefer some privacy nudges over others. Privacy nudges should be cognitively easy to process and should not slow down work processes. At the same time, these nudges should be transparent and designed in a non-manipulative way, so that everyone has the chance to notice the nudge.
Our private and working lives are shifting substantially to digital environments, and more and more decisions are made online. However, contradicting the economic theory of the homo economicus, individuals do not act entirely rationally all the time. Especially in digital environments, individuals often decide irrationally and to their own disadvantage. Online, individuals tend to share disproportionately more information than in direct face-to-face communication.
Widespread analysis of personal data yields substantial innovation potential and economic value, as well as more efficient working models. Yet, according to a World Economic Forum article, only 45% of people believe these technologies improve their lives. In the same context, public concern about the potential risks that the availability of personal information entails is growing, particularly as the vulnerability to discrimination, commercial exploitation and unwanted monitoring is ubiquitous. Thus, providers of digital platforms face the complex decision of how to collect, store and analyse personal data.
Mini-series on Privacy Nudging. The first article is Privacy Nudging Can Stop Irrational Decision-Making. The second is Capturing the Complexity of Digital Privacy Nudging. The third is Ethical Design Nudging Supports Autonomy and Transparency. The fourth is Detangling the Architecture of Ethical Privacy Nudging. A related article on boosting: How Boosting Can Equip Us to Make Better Choices.
Responsible and privacy-sensitive information systems constitute a key challenge – at the individual, the business and the societal level. In the process of disclosing and collecting data, a promising method to strengthen users’ privacy-sensitive decision-making is a dedicated platform choice architecture.
The design and development of digital platforms with privacy-sensitive choice architecture features is complex, and many different aspects need to be considered. In this article we focus on which privacy nudges users prefer. To better understand these user preferences, we have to understand the effect of features in the choice architecture.
One approach to explaining the underlying mechanism of nudges is Daniel Kahneman’s dual process theory, which posits that individuals use two systems of thought.
System 1 represents our intuitions or our unconscious autopilot.
System 2 expresses itself through our conscious planning and control, which requires significantly more mental effort and time.
Both systems are active at the same time and usually work together smoothly. In everyday life, though, individuals rarely have enough time and information to fully evaluate all alternatives with both systems. Instead, individuals tend to deploy so-called heuristics (mental shortcuts): informal rules of thumb that reduce the complexity of decision-making. Although heuristics are an efficient way to solve recurring problems, they can lead to systematic errors such as biases in information evaluation. For example, personal data is often disclosed carelessly because the risk of unwanted monitoring is mentally less tangible (availability heuristic).
These false conclusions are often systematic and thus predictable deviations from rational behaviour. This is where nudges come into play. Privacy nudging is a concept that can enable the users of digital systems to make privacy-sensitive decisions for their own data protection. Some studies provide evidence that users prefer features that are cognitively easy to process and do not slow down work processes. Examples of this kind of nudge include default settings as well as framing elements.
Default settings are very effective: since individuals often do not adapt privacy settings to their needs, the default option (the status quo) remains overly preferred (status-quo bias). In addition, the default option serves as a reference point: each decision option is weighed against this alternative, and the decision is pulled in its direction. In the following example, pre-selected options are set as defaults, predetermining which private data is shared.
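To make the default mechanism concrete, a minimal sketch follows. All names and options here are hypothetical illustrations, not any particular platform’s settings: privacy-friendly defaults pre-select only the least sensitive options, so users who never touch the settings (the status quo) share no more than the default allows.

```python
# Hypothetical sketch of a default nudge: the pre-selected options
# determine what a user shares unless they actively change them.

PRIVACY_FRIENDLY_DEFAULTS = {
    "share_profile_name": True,    # needed for collaboration
    "share_activity_log": False,   # sensitive, off by default
    "share_location": False,       # sensitive, off by default
}

def effective_settings(user_changes=None):
    """Return the settings that take effect: the defaults,
    overridden only where the user actively made a choice."""
    settings = dict(PRIVACY_FRIENDLY_DEFAULTS)
    settings.update(user_changes or {})
    return settings

# Most users never adapt the settings (status-quo bias),
# so the defaults alone decide what is shared:
print(effective_settings())

# A user who actively opts in to location sharing:
print(effective_settings({"share_location": True}))
```

Because the status-quo bias keeps most users on the pre-selected values, choosing privacy-friendly defaults is itself a design decision with large aggregate effects.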
Framing effects occur when two identical alternatives influence the users’ decision-making behaviour differently due to their different presentation. For example, colored fonts draw attention to selected elements in order to emphasize certain decision alternatives. In the following example, contextual cues are presented. For instance, a red-colored button can indicate that a user is about to create a public channel that can be seen by all co-workers.
Alternatively, a green-colored button can indicate that a user is about to create a private channel, which only invited co-workers can join.
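The framing cue above can be sketched as a simple mapping from channel visibility to a button color (the channel types and color values are illustrative, not any real platform’s design):

```python
# Hypothetical sketch of a framing nudge: the identical action
# ("create channel") is presented differently depending on how
# widely the resulting channel will be visible.

def button_color(channel_visibility):
    """Map channel visibility to a contextual color cue: red signals
    'stop and think' for public channels visible to all co-workers,
    green signals a private, invitation-only channel."""
    if channel_visibility == "public":
        return "red"
    if channel_visibility == "private":
        return "green"
    raise ValueError(f"unknown visibility: {channel_visibility!r}")

print(button_color("public"))   # public channel: visible to all co-workers
print(button_color("private"))  # private channel: only invited co-workers join
```

The options themselves are unchanged; only their presentation differs, which is exactly what defines a framing element.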
Interestingly, many users prefer red framing over green framing. A red signal may be cognitively closely linked to the action of “stop” and needs little further interpretation by the individual. The green element may require more cognitive effort, as more interpretation is needed for the decision-making process. Such effortful elements tend to engage System 2 thinking, which is characterized by conscious planning and control.
Some studies suggest that in privacy-related decision-making users perceive nudges more positively when the nudges require less cognitive work. It must be noted, however, that if a nudge addressed the automatic thinking system only, individuals might not be able to escape its effects, and it could hardly be said that their freedom of choice is unrestricted. In this case, the designed nudge would diminish the individuals’ autonomy. Hausman and Welch define autonomy as the degree of control an individual has over his or her own deliberations, conclusions, and decisions. Freedom in the sense of the options available for choice may be unchanged, while freedom in the sense of control over the decision is limited. Debating freedom of choice, some argue that certain nudge designs can manipulate people’s choices and behavior. Manipulation, unlike coercion, does not restrict the available choices, but rather affects the way an individual comes to a decision, forms preferences, or shapes goals. It is therefore critical to design nudges that are easy to process and at the same time do not manipulate the user.
To counteract manipulation of users, an additional layer in the design should be considered: choice architecture should focus on transparency. An action can be considered manipulative if it is not transparent, so individuals should be informed about such interventions in order to preserve their autonomy and to protect them from abuse by nudges. Accordingly, nudges should be designed in such a way that everyone who is paying attention can notice the nudge and the intention of the choice architecture.
Parts of this post were first published in:
Barev, T. J. & Janson, A. (2019): Towards an Integrative Understanding of Privacy Nudging – Systematic Review and Research Agenda. Annual Pre-ICIS Workshop on HCI Research in MIS (Pre-ICIS).
Barev, T. J.; Janson, A. & Leimeister, J. M. (2020): Designing Effective Privacy Nudges in Digital Environments: A Design Science Research Approach. International Conference on Design Science Research in Information Systems and Technology (DESRIST) (pp. 388–393). Springer, Cham – Vinton G. Cerf Award.
Schöbel, S.; Barev, T. J.; Janson, A.; Hupfeld, F. & Leimeister, J. M. (2020): Understanding User Preferences of Digital Privacy Nudges – A Best-Worst Scaling Approach. Hawaii International Conference on System Sciences (HICSS).
Barev, T. J.; Schöbel, S.; Janson, A. & Leimeister, J. M. (2021): DELEN – A Process Model for the Systematic Development of Legitimate Digital Nudges. In Chandra Kruse, L., Seidel, S. & Hausvik, G. I. (Eds.), International Conference on Design Science Research in Information Systems and Technology (DESRIST) (Vol. 12807, pp. 299–312). Cham: Springer.