{"id":9030,"date":"2020-12-01T17:40:21","date_gmt":"2020-12-01T17:40:21","guid":{"rendered":"https:\/\/dataethics.eu\/?p=9030"},"modified":"2020-12-01T17:40:23","modified_gmt":"2020-12-01T17:40:23","slug":"culture-by-design-a-data-interest-analysis-of-the-european-ai-policy-agenda","status":"publish","type":"post","link":"https:\/\/dataethics.eu\/da\/culture-by-design-a-data-interest-analysis-of-the-european-ai-policy-agenda\/","title":{"rendered":"Culture by design &#8211; a data interest analysis of the European AI policy agenda"},"content":{"rendered":"<p><strong>Abstract<\/strong><\/p>\n<p>This article investigates a moment of the big data age in which artificial intelligence became a fixed point of global negotiations between different interests in data. In particular, it traces and explicates cultural positioning as an interest in the artificial intelligence momentum with an investigation of the unfolding of a European AI policy agenda on trustworthy AI in the period 2018\u20132019.<\/p>\n<p><em>The article was first published in the open access journal <a href=\"https:\/\/firstmonday.org\/ojs\/index.php\/fm\/article\/view\/10861\/10010\">FirstMonday<\/a>. <\/em><em>To cite: Hasselbalch, G. (2020). Culture by design &#8211; a data interest analysis of the European AI policy agenda. First Monday, 25(12). https:\/\/doi.org\/10.5210\/fm.v25i12.10861<\/em><\/p>\n<p><strong><a name=\"p1\"><\/a>Introduction<\/strong><\/p>\n<p>At the end of the first 20 years of the twenty-first century, artificial intelligence technologies (AI) <a name=\"1a\"><\/a>[1] came to be at the center of a global public debate on policy, media and industry. From transpiring as a scientific endeavor and sci-fi curiosity, AI had transformed into socio-technical systems with rapid and broad societal adoption and consequently a fixed point of governance in the European Union. 
EU legislators had just implemented a momentous data protection law reform to address challenges of a big data digitalization of societies, and on a global scale, states and companies alike were carving their space with more or less aggressive data harvesting advances, while citizens were toiling to understand their own role in emerging big data technological environments. Against that background, a European AI strategy was published in early 2018 by the European Commission and further developed in policy and expert group initiatives over a two-year period with a growing emphasis on \u201cethical technologies\u201d and \u201ctrustworthy AI\u201d.<\/p>\n<p>This article traces and explicates \u201cculture\u201d as an interest in a societal AI momentum with an analysis of the European AI policy agenda as it evolved in the period 2018\u20132019, focusing in particular on the work of a high-level expert group on AI set up by the European Commission to inform the AI strategy. The article\u2019s analysis focuses on events, documents and statements that have contributed to the development of an official AI agenda in Europe and is informed by the author\u2019s active participation as a member of the high-level group. Predominantly, the European AI agenda is examined as a component of a general process of value negotiations in a global environment. Indeed, the evolving agenda was from the outset explicitly framed as a European \u201cthird way\u201d in what was dubbed in public discourse the \u201cglobal AI race\u201d between Europe, and the U.S. and China.<\/p>\n<p>The term AI was used in public policy-making and discourse in the 2010s generally to describe the next frontier in big data society. AI was developed, designed and used by all types of societal stakeholders to make sense of large amounts of data, predict patterns, analyze risks and act on that knowledge to make decisions within politics, culture, industries, and on life trajectories. 
In essence, the popular use of the term came to denote a particularly advanced and complex design of big data systems: automated, goal-oriented, perceptive, reasoning and made powerful by complex data acquisition and processing. Thus, above all, the article investigates an institutionally framed cultural positioning as an interest in data, understanding AI as complex data processing systems and data design that form the locus of societal power dynamics. As such, it does not seek to predict the path of AI adoption, as this will be shaped by a much broader sum of actors, interests and conditions, which includes the formally mitigated consequences of law, policy and institutional practice as well as the unintended outcomes of people\u2019s (users, engineers, etc.) practices (Epstein, <em>et al.<\/em>, 2016).<\/p>\n<p>Theoretically, the article is grounded in a discussion of the role of culture and interests in the development and governance of socio-technical systems. It builds on conceptualizations of culture, power and technologies in cultural studies, applied ethics and science and technology studies (STS). In combination, these perspectives treat technologies as dynamic concepts constantly in negotiation with human, societal and cultural factors. The understanding is that while technological artefacts may impose on humans and human societies, humans simultaneously impose on technology, and we may choose to do so with intention and direction. We create laws, policies and standards; we educate and program, hack and revolt. 
This is an important view on technological development and change as it empowers human governance efforts when considering the multiple human and non-human factors that shape the direction of a technological development.<\/p>\n<p><strong>The European AI agenda: Sculpting the cultural interest in AI<\/strong><\/p>\n<p>Since the 1980s, AI\u2019s adoption in society has progressed from the rule-based expert systems encoded with the knowledge of human experts to systems that evolve and learn from big data in digital environments with increasingly autonomous decision-making agency and capabilities (Alpaydin, 2016). In the 2010s, socio-technical data infrastructures enhanced by AI software systems to autonomously, or semi-autonomously, perceive and interpret their environments were increasingly embedded worldwide in private and public sectors across health care, security, finance, emergency, defence, e-government, law, transportation and energy. The U.S. had been a first mover in terms of global capital investment in AI as well as in the development of an AI ecosystem, and China rapidly followed suit (Merz, 2019). In Europe, an increasing number of examples of socially challenging applications of AI from these regions had been in the public limelight, for example, the use of biased sentencing software in the U.S. judicial system (Angwin, <em>et al.<\/em>, 2016) or the mass citizen social credit scoring system of China (Kobie, 2018). But gradually the social implications of AI used in European settings were also edging into public awareness as a component of decision-making in many different sectors [2]. The European Union was, for example, proposing and adopting initiatives to establish smart border management systems and to integrate instruments for data processing and decision-making systems in asylum and immigration and law enforcement cooperation. 
In Europe there were also examples of experiments with frameworks for automating the detection and analysis of terrorist-related online content and financing activities. At the same time, individual member states were toying with AI for predictive policing, public administration of benefits, tracing vulnerable children, tax collection and even social scoring, while private sector examples most prominently included AI in banking and insurance (Spielkamp, 2019). As such, AI had become the center of negotiations between different societal interests.<\/p>\n<p>It was in this setting that the contours of an institutionally framed European AI agenda took shape as a distinctive cultural positioning with an emphasis on \u201cethical technologies\u201d and \u201ctrustworthy AI\u201d. It was spelled out in core documents and statements in a process that involved European member states, a European high-level expert group on AI, a multistakeholder forum called the European AI Alliance and the European Commission. EU decision-makers were recognizing that AI had become an area of strategic importance that was transforming critical infrastructures in all the aforementioned sectors and was therefore also a driver of economic development. The EU\u2019s AI approach was on those grounds defined as a policy investment in ensuring Europe\u2019s competitiveness on a global scale by, for example, increasing annual investments in AI development and research and establishing an agreement to join forces with national strategies on AI in member states. Thus, the AI agenda was also often described in this period as a response to a \u201cglobal AI race\u201d in the public media, debates and reports. 
The main focus here was the competition among regional players for global leadership on the resources for AI (<em>e.g.<\/em>, data access), capital investment, AI technical innovation and practical and commercially viable research and education, as well as \u201cethics\u201d as a form of risk mitigation and regulation (Merz, 2019). Here, I propose that besides a race for resources, technological supremacy and risk mitigation, the explication of values-based cultural frameworks for AI played a key role.<\/p>\n<p>The European Commission published its first communication on artificial intelligence in early 2018 (European Commission A, 2018), which was accompanied by a declaration of cooperation on artificial intelligence signed by 25 European member states (European Commission B, 2018) (which was later in 2018 concretized in a Coordinated plan on artificial intelligence, \u201cmade in Europe\u201d, European Commission C, 2018). This first communication presented a general initial European approach to AI with a focus on cooperation among member states, multi-stakeholder initiatives, investment, research and technology development. Above all, AI was at this point described as part of a European economic strategy within a global competitive field. While it was not a core strategic element of this first communication on the topic, a values-based positioning was also offered: \u201cThe EU can lead the way in developing and using AI for good and for all, building on its values and its strengths.\u201d (European Commission A, 2018). A first step to address ethical concerns was made with the plan to draft a set of AI ethics guidelines.<\/p>\n<p>Following this, a European high-level expert group on AI was established in June 2018, with 52 selected members consisting of individual experts and representatives from different stakeholder groups. Their mandate was to develop AI ethics guidelines and policy and investment recommendations for the EU. 
From the outset, the group\u2019s work was framed in terms of a distinctive European framework. For example, at the group\u2019s first meeting in Brussels in June 2018, a European Commission representative responded to a comment regarding Europe\u2019s competitiveness: \u201cAI cannot be imposed on us\u201d, and it was concluded that \u201cEurope must shape its own response to AI\u201d [3].<\/p>\n<p>Notably, the \u201cEuropean response\u201d was already here defined in terms of what was presumed to be a set of shared European values. For example, at the same meeting, the chair introduced the core constituents of the group\u2019s mandate and the European Commission\u2019s expectations for the group as follows: \u201cIt is essential that Europe shapes AI to its own purpose and values, and creates a competitive environment for investment in AI\u201d [4]. This decree was later taken up in the group\u2019s discussions and defined as the search for a distinctive European position in a global setting: \u201cDiscussion also centred on identifying the uniqueness of a European approach to AI, embedding European values, while at the same time identifying the need to operate successfully in a global context\u201d [5].<\/p>\n<p>The ethics guidelines published a year later in April 2019 were also outlined on the basis of \u201cEuropean values\u201d. Values were introduced in this document with reference to the European Commission\u2019s vision to, among other things, ensure \u201can appropriate ethical and legal framework to strengthen European values\u201d [6]. The key references here were the European legal frameworks, such as the Charter of Fundamental Rights and the General Data Protection Regulation. 
However, European values were also encompassed in one unifying ethics framework defined as the \u201chuman-centric approach\u201d in which the individual human being\u2019s interests prevail over other societal interests: \u201cThe common foundation that unites these rights can be understood as rooted in respect for human dignity\u2014thereby reflecting what we describe as a \u2018human-centric approach\u2019 in which the human being enjoys a unique and inalienable moral status of primacy in the civil, political, economic and social fields\u201d [7].<\/p>\n<p>Yet it was the delineation of a specific type of technology design and culture of AI practitioners which in the end became the ethics guidelines\u2019 unique cultural positioning. By 2019, several ethics guidelines for AI had already been published in European member states, outside Europe and by international organizations. Most notably, only a few months after the high-level expert group\u2019s ethics guidelines were published, 42 countries adopted an Organisation for Economic Co-operation and Development (2019) recommendation that included ethical principles for trustworthy AI. In comparison with other, more principle-based ethics guidelines, however, the high-level expert group\u2019s ethics guidelines were particularly focused on the operationalization of ethics in the design of AI, that is, on framing the practice of building AI and hence providing concrete and practical guidance to AI practitioners. Europe was consequently also described in the guidelines as a potential leader in the development of \u201cethical technology\u201d, with a call to create a very specific approach to the design of AI. 
As such, ethics and values were considered a property of technological design and practice, and the guidelines urged practitioners, in addition to deployers and users of AI, to implement and apply seven ethical requirements, supplemented with an assessment list of concrete questions to guide AI practitioners.<\/p>\n<p>During the process of developing the ethics guidelines, the title of the work changed from \u201cTrusted AI\u201d to \u201cTrustworthy AI\u201d [8]. While this might be conceived of as a change primarily in semantics, the transformation in fact built on core discussions at group meetings centered on the inherent values of AI design. Accordingly, the title mirrored the conclusion of the group discussions, which was that AI technologies should not just be trusted; the EU needed to ensure that trustworthiness was built into the \u201ctechnology culture\u201d of AI innovation. As stated in the report from the first workshop of the high-level expert group, \u201cTrusted AI is achieved not merely through regulation, but also by putting in place a human-oriented and ethical mind-set by those dealing with AI, in each stage of the process\u201d [9].<\/p>\n<p>In this way, Trustworthy AI came into being as the European \u201cthird way\u201d in AI innovation. This also meant that when working on the policy and investment recommendations that were published in June 2019, the high-level group proposed Trustworthy AI as a core European strategic area (HLEG B, 2019). 
Hence, the recommendations emphasized leveraging European \u201cenablers\u201d for Trustworthy AI: providing human-centric AI-based services for individuals; making use of public procurement to ensure trustworthy AI; integrating knowledge and awareness and updating skills among policy-makers, workforces and students; developing a research university network on AI ethics and the other disciplines necessary to ensure trustworthy AI across Europe; providing legal and technical support to implement trustworthy AI; and mapping legal frameworks and creating new laws where the risks were considered high (<em>e.g.<\/em>, when AI is used in the context of mass citizen scoring or autonomous weapons). Recommendations were even made to develop a European AI infrastructure based on personal data control and privacy (HLEG B, 2019).<\/p>\n<p>Alongside the high-level expert group\u2019s development of a set of ethics guidelines and policy and investment recommendations on AI, the way in which a European ethics and values-based approach to AI was addressed by the European Commission also transformed from a brief \u201cconcern\u201d in a political strategy (European Commission A, 2018) into a strategic point of positioning. Nathalie Smuha, who was the coordinator of the high-level group, has described how the work of the high-level group was quickly adopted within the European Commission\u2019s general AI strategy (Smuha, 2019). As she explains, the European Commission at that time counted around 700 active expert groups, such as the high-level expert group on AI, that were tasked with drafting opinions or reports advising the Commission on particular subjects. However, their input was not binding, and the Commission was independent in the way it took into account the groups\u2019 advice and expertise. For example, only rarely did their reports become the core topic of a Commission communication [10]. 
Nevertheless, when the high-level expert group presented the ethics guidelines to the Commission in March 2019, an almost immediate agreement was reached to publish the last communication in the two-year period, \u201cBuilding trust in human-centric AI\u201d (European Commission D, 2019), which stated its support for the seven key requirements of the guidelines and encouraged all stakeholders to implement them when developing, deploying or using an AI system [11]. This culminated in the promise of the new president of the European Commission, Ursula von der Leyen, at the end of 2019: \u201cIn my first 100 days in office, I will put forward legislation for a coordinated European approach on the human and ethical implications of Artificial Intelligence\u201d [12].<\/p>\n<p><strong>Culture and technological change<\/strong><\/p>\n<p>How can we explain a forceful explication of cultural values as a strategic interest in the face of technological change? Early in the history of the introduction of computers in society, one of the pioneers within applied computer ethics, James H. Moor, described in his famous essay \u201cWhat is computer ethics?\u201d the policy vacuums that emerge when policies clash with technological developments that force us to \u201cdiscover and make explicit what our value preferences are\u201d [13]. He predicted that a computer revolution of society would happen in two stages marked by the questions we ask. In the first \u201cintroduction stage\u201d, we will ask the functional questions \u2014 how well does this or that technology function for its purpose? In the second \u201cpermeation stage\u201d, when institutions and activities are transformed, we will start asking questions regarding the nature and value of things [14]. The historian of technology Thomas P. 
Hughes similarly detailed the general developmental phases of large evolving and expanding technological systems from invention, development, innovation, transfer and growth, to competition and consolidation (Hughes, 1987, 1983). Hughes refers to \u201ca battle of the systems\u201d in which an old and a new system exist at the same time in a relationship of \u201cdialectical tension\u201d [15]. The phase of competition and consolidation is therefore also a moment of conflict and resolution, not only among engineers but also in politics and law [16]. In these moments of conflict, critical problems are exposed, different interests are negotiated and finally gathered around solutions to direct the evolution of the systems. A new system, or the transformation of the old system, then evolves out of the very problems identified and solved in this phase. Unlike Moor, Hughes does not consider these moments of explication as solely induced by the transformative character of the technological systems. He considers their negotiation in complex social spaces. In fact, he holds that technologies themselves are intertwined with social, economic and cultural problems [17]. That is, in an STS perspective on technological change, such as Hughes\u2019, large technical systems are sociotechnical, meaning that they are not just material and technical but also represent complex power dynamics between multiple actors and societal interests. Therefore, they cannot be explained with a focus on technical innovation or the engineering of materials only, as they are integrally part of society at large.<\/p>\n<p>It follows that to explain the socio-technical shape of the AI momentum of 2018\u20132019, we need to consider it as something more than just technically innovative, practically implementable and economically viable. We may describe it as \u201ccultural\u201d. 
To do this, we need some additional perspectives.<\/p>\n<p>In a cultural studies perspective, culture is not just one facet but multifaceted \u2014 informally and formally created by and in interaction with people and artefacts \u2014 and the meaning of these cultural relations is in constant contestation and social negotiation. The founding Marxist scholar of the British cultural studies tradition, Raymond Williams, for example, famously defined culture as \u201cshapes\u201d, a set of \u201cpurposes\u201d and \u201cmeanings\u201d that are expressed \u201cin institutions, and in arts and learning\u201d and in \u201cordinary\u201d practice [18]. Accordingly, culture is \u201ca whole way of life\u201d [19]. It consists of prescribed dominant meanings and, more importantly, also the negotiations of these. The meaning of culture is in \u201c(&#8230;) active debate and amendment under the pressures of experience, contact and discovery\u201d [20], and as such it is simultaneously \u201ctraditional\u201d and \u201ccreative\u201d. Hence, there are two sides to culture: \u201c(&#8230;) the known meanings and directions, which its members are trained to; the new observations and meanings, which are offered and tested.\u201d [21]. In this perspective, culture is a site of power negotiation.<\/p>\n<p>We may continue here and think of cultural power negotiations in the context of technological development and innovation. Here, culture, or the \u201ccultural\u201d, can be traced in the very design of technology. Hughes defines technological culture as a complex composite of socially embedded interests, goals and intentions [22]. Famously, he held that technological systems do not become autonomous by themselves but require momentum, which depends on the interests (the culture) of the organizations and people invested in the system [23]. 
He mentions a few of these that were invested in the development of the modern electric power system that we might also recognize as stakeholders in the AI momentum of the 2010s: \u201cManufacturing corporations, public and private utilities, industrial and government research laboratories, investment and banking houses, sections of technical and scientific societies, departments in educational institutions, and regulatory bodies &#8230;\u201d [24]. He contends that differences in \u201ctechnological styles\u201d became particularly apparent in the twentieth century due to the increasing availability of \u201cinternational pools of technology\u201d (including, <em>e.g.<\/em>, international trade, patent circulation, the migration of experts, technology transfer agreements and other forms of knowledge exchange) [25]. Accordingly, he argues that technological style is the language of culture, so to speak, or it is, as he says, an \u201cadaption to environment\u201d [26]; that is to say, culture is the sum of \u201csystemized knowledge\u201d created in interaction with the economic and social institutions involved.<\/p>\n<p>This view is characteristic of an STS perspective on the cultural components of technology development. Here, I will not get into debates regarding culture as the epistemological weight on the scale of social constructivism and relativism or technological determinism and natural realism in studies of science and technology (<em>e.g.<\/em>, as represented in the debate between Callon and Latour [1992] and Collins and Yearley [1992]). That is, although I recognize that culture is a contested concept, it is more generally in STS related to <em>the way<\/em> we get to know things and the skills and resources we use to create a technology. We might say that distinct \u201cknowledge cultures\u201d or \u201ctechnological cultures\u201d are the foundations of a technology\u2019s design and adaption in society. 
Andrew Pickering, for example, describes culture as the resources that scientists use in their work or a shared conceptual field [27]. Harry M. Collins defines cultural skills as intents and purposes and sets of rules of action for the design of a technology [28]. They are the inexplicable or \u201chidden\u201d components of technology development [29]. He also argues that these implicit cultural skills of technology practitioners transform when they are made explicit and that this transformation of skills depends on changes in a \u201ccultural ambience\u201d that is \u201cenmeshed in wider social and political affairs\u201d [30].<\/p>\n<p>The concept of \u201cdata cultures\u201d can here be used to illustrate the cultural variations of the different technological \u201cstyles\u201d of the way in which data is managed and treated in technology design. These various styles could be described as the \u201ctechnological cultures\u201d of data design based on shared skills and knowledge frameworks for data technology practitioners, implicit, for example, in ideals about the big data value for technology development [31] and also explicitly described in data protection laws or ISO standards, such as ISO 27701 on how to create privacy information management systems (PIMS). Thus, the very practices of data scientists and designers can be said to be framed within specific cultural systems of meaning-making, and accordingly the practice of developing a data system and its design is a cultural practice: \u201cshaped by ideas about the cultivation and production of data that reflect epistemologies about, for example, ordering, classification, and standards\u201d [32]. Accordingly, we may also argue that the very data design of a technology has cultural properties that can be examined as a culturally coded system. For example, AI is not just \u201ccoded\u201d data; it is <em>data culture in code<\/em>. As such, the AI system\u2019s data design, or any data design, is culture in action. 
For example, as outlined by Collins (1987) in his description of cultural skills and AI, in expert systems culture is transformed into explicated categories, literally coded, and in advanced self-learning systems, it is even encoded within the systems when autonomous machine predictions and decisions are made; that is, the cultural classification of the world is actively coded and produced within the system.<\/p>\n<p>To conclude, technology development is enmeshed in cultural spaces that can be depicted as the epicenter of interest negotiations. Notably, Hughes illustrated how each developmental phase of a technological system produces a specific \u201cculture of technology\u201d, which is the sum of this complex set of interests. The technology culture is therefore, according to Hughes, also the basis of a momentum of a technological system, and, importantly, competing cultures must convert to the dominant culture of the momentum or perish (Hughes, 1987).<\/p>\n<p><strong>Making the invisible visible<\/strong><\/p>\n<p>Opacity was in the late 2010s often described as a core ethical challenge of the very design of AI (Burrell, 2016), on account of either intentional acts of creating obscurity with \u201csecret algorithms\u201d (Pasquale, 2015), inconceivable \u201cmath\u201d (O\u2019Neil, 2016) or permeating discursive power that concealed the interests of institutions and corporations (Zuboff, 2015). This is a core challenge that we may address here. As disparate as they may seem in their perception of the relation between culture and technological change, there is, respectively, in applied computer ethics, STS and cultural studies, a shared emphasis on the importance of making the invisible visible and explicating cultural components in order to effect change.<\/p>\n<p>James H. Moor considers the \u201cinvisibility factor\u201d [33], such as \u201cinvisible programming values\u201d [34], a principal ethical challenge of the computer and its use <em>per se<\/em>. 
Collins [35] explains the move of taken-for-granted cultural skills from inexplicable to explicable categories as a way, among other things, to reduce ambiguity in knowledge and practice due to cultural and contextual distance. Hughes [36] takes a grander view when looking at the consolidation in society of larger technological systems, arguing that they do have a direction, and therefore the explication of goals is more important for a young system than for an old one.<\/p>\n<p>In cultural studies and critical data studies, the explication of cultural components is coupled with the exposure of cultural power dynamics. For example, a distinct field of feminist technoscience scholars, such as Judith Butler, Donna Haraway and Sandra Harding, have raised feminist critiques of science, technology, practices and knowledge in terms of the cultural gender power dynamics they reproduce and enforce (\u00c5sberg and Lykke, 2010). Likewise, the data scientists and feminists Catherine D\u2019Ignazio and Lauren F. Klein (2020) describe what they refer to as \u201coppressive\u201d data science cultures in their book <em>Data feminism<\/em>. These, they argue, are reflected in the goals and priorities set for the very data design of the technology, for example, when minority groups are underrepresented in data used as the basis for decisions on social benefits, when critical medical analysis only benefits one privileged group, or, conversely, when a minority group is overrepresented in data that puts them at a disadvantage in society, such as data from specific city zones used for predictive policing. 
In these perspectives, the various meaning-making cultural practices and shared taken-for-granted cultural systems that naturalize specific situated views of the world and enforce power dynamics can only be challenged if explicated.<\/p>\n<p>In other words, the cultural foundation of a technological system, what we have also referred to here as its \u201cshape\u201d, the \u201cknowledge culture\u201d behind it or its \u201ctechnological style\u201d, may also be seen as a prioritization of the inherent values of a cultural system. In an applied ethics perspective, values are, for example, described by the philosopher of technology Philip Brey as \u201cidealized qualities or conditions in the world that people find good\u201d [37]. These are ideals that we can work towards realizing in the design of a computer technology. Thus, technologies can have a specific cultural shape that consists of the implicit systems of organized knowledge, practices and meanings that go into their design. Values are not just personal ideals or transcendentally \u201ctrue\u201d or \u201cgood\u201d; they are culturally situated and constantly engage with shared cultural purposes and common meanings by enforcement and\/or negotiation (Williams, 1993). This is also true for our ethical thinking about digital technologies, where culture, as for example Charles Ess has illustrated in his analyses of ethics, culture and technologies, plays an essential role. Accordingly, in Western societies an ethical emphasis has been placed on \u201cthe individual as the primary agent of ethical reflection and action, especially as reinforced by Western notions of individual rights\u201d [38]. As such, culturally situated ethical thinking also has an interest in the power dynamics of society regarding who or what ethics is for.<\/p>\n<p>In this line of argument, a first step to guide the development of trustworthy AI would be to make the cultural foundation (the data cultures) visible. 
Essentially, we need to consider this explication of cultural components as an ethical and moral choice. As the information studies scholars Geoffrey C. Bowker and Susan Leigh Star [39] state in their work on classifications and standards in the development of information infrastructures, \u201cEach standard and each category valorizes some point of view and silences another. This is not inherently a bad thing \u2014 indeed it is inescapable. But it is an ethical choice, and as such it is dangerous \u2014 not bad, but dangerous.\u201d<\/p>\n<p><strong>Data interest analysis: The European agenda\u2019s cultural shape of AI<\/strong><\/p>\n<p>I have so far examined how a European AI agenda evolved over a two-year period into a distinctive European cultural positioning with an emphasis on \u201cethical technologies\u201d and \u201ctrustworthy AI\u201d [40]. First and foremost, I examined this as an interest in shaping a technological AI momentum. In this last part of the article, I move on to an investigation of four cultural components of this cultural interest in the data of AI as it was explicated in an institutionally framed process between 2018 and 2019.<\/p>\n<p><em><strong>The four cultural components of the European data interest in AI<\/strong><\/em><\/p>\n<p>As illustrated, during the two-year period a negotiation of a shared cultural framework for the development and adoption of AI took place and was above all broadly defined in terms of European values and ethics. Importantly, this also included a conceptualization of a European technology culture. I propose here that the European AI agenda sought to explicate this in four cultural components: 1. the cultural context, 2. the cultural foundation, 3. the technological data culture and 4. the cultural data space.<\/p>\n<p><em>1. 
The cultural context: Defining the technological momentum<\/em><\/p>\n<p>As we have learned, a technological system does not evolve autonomously; it is directed within a momentum that arises from the interests invested in the system (Hughes, 1983). The culture of a larger technological system is internal to the system in the sense that it represents the sum of the focused interests and forces at play in the momentum of this particular system. But culture is also a force external to the very system, a \u201ccultural ambience\u201d that is entangled in general social and political affairs [41]. Transformations in the resources, skills and knowledge that drive the development and adoption of a technological system can therefore also be influenced by changes in this \u201ccultural ambience\u201d.<\/p>\n<p>Although AI systems had already been adopted and integrated primarily in some parts of the private sector in the late 2010s, their general adoption in European society, including in the public sector, was a recent development, and in policymaking, the forceful focus on AI was new. Therefore, we may consider AI in terms of what Hughes (1983) refers to as a \u201cyoung system\u201d in society, in which the explication of goals is particularly prevalent. Along these lines, the European AI agenda may equally be considered a cultural interest in shaping the technological momentum of AI systems and directing their evolution in society in Europe and globally. The high-level group\u2019s policy and investment recommendations (HLEG B, 2019), published one year into the period in which the European AI agenda unfolded, describe the different societal phases of digitalization where AI forms a \u201cthird wave\u201d characterized by its adoption in European society: \u201cEurope is entering the third wave of digitalization, but the adoption of AI technologies is still in its infancy. 
The first wave involved primarily connection and networking technology adoption, while the second wave was driven by the age of big data. The third wave is characterized by the adoption of AI which, on average, could boost growth in European economic activity by close to 20 percent by 2030. In turn, this will create a foundation for a higher quality of life, new employment opportunities, better services, as well as new and more sustainable business models and opportunities\u201d [42].<\/p>\n<p>The two-year period was characterized by a sense of urgency to gain force within a global AI momentum, and in particular the stakeholders that make up a momentum were therefore a central topic of the negotiation and debate. This included, for example, a focus on AI practitioners, entrepreneurs, data analysts, educators, the work force, policy-makers and citizens in general. Not only were the stakeholder interests of the members of the high-level expert group a continuous topic of contestation in public debate, but generally a broad range of societal stakeholders were either sought out to participate in, for example, the AI alliance multistakeholder online platform created as part of the strategy and the public consultations of the high-level expert group reports, or they were addressed in the content of the reports and in presentations at various public events.<\/p>\n<p>The depiction of an AI momentum was prevalent at the first public event that the high-level expert group was invited to attend [43]. Launched with a press release emphasizing the role of AI in \u201cboosting European competitiveness\u201d [44], the event started off with a speech by the then European Commissioner for Digital Economy and Society, Mariya Gabriel, who outlined the strategic goals of the European Commission with a clear message. 
European stakeholders could indeed shape the direction of AI: \u201cWe all have an important role to play in defining a shared European vision for Artificial Intelligence. Yes, ladies and gentlemen, digitalization is everywhere, data is everywhere. This is just the beginning of a new technological revolution.\u201d [45]<\/p>\n<p>Notably, like many others in this period, Gabriel in her speech also equated digitalization with data and at the same time described data as the foundation for AI evolution. In fact, the first European Commission Communication on AI had recognized data as a key factor for the development of AI in Europe with a reference to the creation of \u201cdata rich environments\u201d as \u201cAI needs vast amounts of data to be developed\u201d (European Commission A, 2018). Thus, the main driver for AI was held to be data. As described in the high-level expert group\u2019s policy and investment recommendations, this \u201cthird wave\u201d of technological development was in fact \u201cdriven by the age of big data\u201d. The EU was here consequently also described as \u201ca pivotal player in the data economy\u201d [46] as data \u201cis an indispensable raw material for developing AI\u201d [47]. Therefore, data was also held to be core to what the stakeholder interests of the AI momentum were invested in: \u201cEnsuring that individuals and societies, industry, the public sector as well as research and academia in Europe can benefit from this strategic resource is critical, as the overwhelming majority of recent advances in AI stem from deep learning on big data\u201d [48].<\/p>\n<p><em>2. The cultural foundation: The values and ethics framework<\/em><\/p>\n<p>Culture is a shared conceptual framework for meaning production that consists of what we know and what we are trained in. 
A conceptual values-based framework for personal data is, in the European context for example, formalized in legal frameworks, such as the General Data Protection Regulation and the Charter of Fundamental Rights. But culture also consists of the new meanings that are offered and contested (Williams, 1993); that is to say, culture is also a cultural negotiation in which different cultures clash and conflicts of interest may emerge.<\/p>\n<p>With the rise of data-intensive technologies, such as AI, not only the law but also the meaning of a traditional European approach to handling personal data was challenged, and a process of cultural meaning negotiation was therefore initiated. This we may refer to as \u201cdata ethics spaces of negotiation\u201d (Hasselbalch, 2019) that exposed the cultural contexts that were shaping the ethical thinking of this period and ultimately sought to resolve conflicts between competing value systems.<\/p>\n<p>As described, the European AI agenda explicated a general human-centric approach that stressed that human interest prevailed over other interests, as well as a particular approach to data governance that emphasized the empowerment of individuals in the handling of their personal data. For example, the high-level expert group\u2019s ethics guidelines outlined a clear framework for the management of data with one of the seven requirements, \u201cprivacy and data governance\u201d, specifically addressing the human-centric values embedded in the data design of an AI technology. In this context, the concept of human agency stood out as the individual\u2019s knowledge and the information provided for the individual to make decisions and challenge automatic systems [49].<\/p>\n<p>The human-centric approach came to represent the overarching framework of the European AI agenda for resolving different interests and values embedded in AI innovation. 
Conflicts existed between data protection\/privacy, ethics and data-driven innovation; machine automation and the human work force; the interests of the individual and society\/public institutions; as well as scientific and governmental interests. The overarching priority was stated in the policy and investment recommendations: \u201cAI is not an end in itself, but a means to enhance human well-being and flourishing\u201d [50]. As a result, human-centric practical solutions for resolving such conflicts were suggested: ethical technology as a competitive advantage (resolving conflicts between ethics and data-driven innovation); humans-in-the-loop AI solutions for the workplace and upscaling the AI skills of the work force (resolving conflicts between automation and the replacement of workers); data design as an enabler of human well-being and protection, such as developing mechanisms for the protection of personal data and for individuals to control and be empowered by their data (the interests of the individual and society); and generally focusing on the use of non-personal data in business-to-business (B2B) AI solutions rather than the personal data of B2C solutions (resolving conflicts between the risks of using personal data and the data intensity of AI technology development).<\/p>\n<p><em>3. The technological data culture: Skills, knowledge, style and resources<\/em><\/p>\n<p>Technological development is not neutral. Engineers and designers develop technologies within shared knowledge cultures that form the foundation for their work. 
These foundational cultural frameworks can be described as \u201ctechnology cultures\u201d \u2014 shared fields of resources, implicit and\/or explicit skills, experiences, methods and even tools that engineers use when they build technologies and that therefore also contribute to the shaping of technological development.<\/p>\n<p>As described previously, during the process of developing the European AI agenda, the explication of a European \u201ctechnological culture\u201d for the development of AI became an essential focal point. In this respect, the skills, the education, the methods and practices needed in the developmental phase of what was referred to as \u201cethical technology\u201d were core to discussions concerning economic investments, awareness raising and policies. In fact, as previously illustrated, a European ethical design culture grew into being as the European position in the global AI momentum.<\/p>\n<p>In 2019 at the first AI Alliance assembly in Brussels, Commissioner Mariya Gabriel talked about getting the \u201cpolicy right\u201d, which meant adopting and developing AI with \u201ca decisive, yes, but\u201d, as she said. This \u201cbut\u201d was a reference to the European risk mitigation of the ethical challenges of AI [51]. The first challenge, she mentioned, was global competition (<em>e.g.<\/em>, that the EU was several billion euros behind in terms of investments in AI), and the third was \u201cethical and legal concerns\u201d. In between was the second challenge, which according to Gabriel was the social impact of AI. Her suggestion was to invest in education and training, digital education plans and developing digital skills in Europe.<\/p>\n<p>The European strategic investment in a particular \u201ctechnology culture\u201d of AI was also an essential focus of the high-level expert group\u2019s policy and investment recommendations. It first and foremost came to mean a shared foundational AI knowledge culture. 
Europe needed to \u201cfoster understanding\u201d and \u201ccreativity\u201d [52] and generally \u201cempower humans by increasing knowledge and awareness of AI\u201d [53]. In this way, an entire section of the recommendations focused on \u201cgenerating appropriate skills and education for AI.\u201d This was not just limited to technical skills, but also \u201csocio-cultural skills\u201d [54]. In general, there was a key focus on the development of new skills or the updating of skills of not just engineers but also policy-makers and the general work force. This was also extended with a call to develop basic education on AI and literacy in higher and lower education and an \u201cAI competence framework for individuals\u201d [55]. In many instances, the term \u201cdata literacy\u201d was here used interchangeably with the concept \u201cdigital literacy\u201d. Markedly, the public sector was described as playing a fundamental role in the development of a Trustworthy AI \u201ctechnology culture\u201d, <em>e.g.<\/em>, by fostering \u201cresponsible innovation\u201d through public procurement.<\/p>\n<p>One thing was the assumption that a particularly European \u201ctechnology culture\u201d of AI was needed for Europe to succeed in global competition. But how was this \u201ctechnology culture\u201d then explicated? Here, the assessment list of the ethics guidelines was particularly interesting, as it explicated in detail concrete questions to guide the design, management and development of AI within each of the seven requirements for trustworthy AI. A \u201cdata culture\u201d for AI was spelled out here, most explicitly in section 3 on \u201cPrivacy and data governance\u201d. The point of departure was privacy and data protection, after which the list moved on to ensuring the quality and integrity of data and procedures for managing access to data.<\/p>\n<p><em>4. 
The cultural data space: The infrastructure<\/em><\/p>\n<p>According to Hughes (1983), technological style differs from region to region and nation to nation. He equates culture with geographically and jurisdictionally delineated spaces. But we may also add to this depiction the technological evolution of space that has challenged this very correlation between culture, geography and jurisdiction. As a consequence, culture is no longer just the asset of a nation, rooted in geography and national law, but is increasingly extended into virtual communities with \u201ccultures\u201d or \u201csubcultures\u201d delineated by symbolic borders of cultural values and ideas. At the beginning of the twenty-first century, \u201cdata cultures\u201d had been created on the basis of an interjurisdictional digital flow of data. As such, the very \u201carchitecture\u201d of a global data infrastructure had emerged as an interjurisdictional space challenging first and foremost European data protection\/privacy values and legal frameworks. For example, the European Court of Human Rights (ECHR) very early began considering, in its case law, the level of uncertainty that the challenges of technological progress posed to its territorial definition of jurisdiction in cases concerning the right to privacy [56].<\/p>\n<p>In the 2010s, AI was developed primarily on the basis of an interjurisdictional and territorial global big data infrastructure. However, the revelations of embedded data asymmetries in the form of surveillance scandals, fake news and voter manipulation had provoked a European concern with foreign \u201cdata cultures\u201d and their \u201cdata architectures\u201d. The European AI agenda proposed an alternative European data-sharing infrastructure for AI based on a foundational values-based approach to data, but one that was also confined within the European jurisdiction and geographical space. 
In the policy and investment recommendations published, the high-level group described data infrastructures as the \u201cbasic building blocks of a society supported by AI technologies\u201d. Data infrastructures were described as the foundation of a European AI critical public infrastructure, and therefore should be treated as such: \u201cConsider European data-sharing infrastructures as public utility infrastructures.\u201d Thus, the development of this European space should also be invested with a specific set of values and designed \u201cwith due consideration for privacy, inclusion and accessibility, by design\u201d [57].<\/p>\n<p>It was particularly in this description of a European AI data infrastructure and architecture that the cultural interest in data stood out. Thus, the values-based approach was also conceived of as a cultural effort to transfer European values into technological development, positioned against a \u201cnon-European\u201d threat perceived to be pervasively embedded in technological infrastructures: \u201cDigital dependency on non-European providers and the lack of a well-performing cloud infrastructure respecting European norms and values may bear risks regarding macroeconomic, economic and security policy considerations, putting datasets and IP at risk, stifling innovation and commercial development of hardware and computer infrastructure for connected devices (IoT) in Europe\u201d [58].<\/p>\n<p><strong>Conclusion<\/strong><\/p>\n<p>As wild and unruly as it may seem, construed in a hodgepodge of complex relations, interests, symbolic meaning-making, people and artefacts, a technological momentum also has a shape \u2014 a shape that guides its direction, values, knowledge, resources and skills that form its technological architecture and its governance, adoption and reception in society. 
At times, this shape is more explicitly \u201ccultural\u201d and values-oriented than at others, for example when the system is large and socially and culturally transformative, or when it spreads on a global scale.<\/p>\n<p>The global AI momentum of the 2010s was a moment like that. Big data systems empowered by AI technologies were transforming European societies, challenging what was held to be European fundamental values, and drawing out an explication of what it means to do AI in the \u201cEuropean way\u201d. With which values should AI be designed? Which interests should drive the development? What skills and education? What role should technology and science play in society? And could Europe even compete on those grounds? Information policy approaches were transforming from having narrow functional focuses on the digitalization of \u201ceverything\u201d to more complex and multifaceted values-based emphases on the ethical and social implications of data technologies, including everything from legislative measures in competition, data protection, criminal and consumer protection law to research and innovation investments in \u201cethical technology\u201d development and a European data-sharing infrastructure.<\/p>\n<p>By 2018, Europe had been going through a period of self-exploration regarding the role of big data and emerging technologies in European societies in general. Following an all-encompassing digitalization wave, the social and ethical implications were materializing with big data scandals and revelations. A recent reform of the European data protection legal framework was presented as Europe\u2019s powerful global response to these challenges. 
However, a law did not seem to be a sufficient governance response by itself, and therefore a process was initiated to develop what was referred to as a European approach to what was perceived as a general AI evolution of the age of big data.<\/p>\n<p>In this article, I have investigated a cultural interest in the global AI momentum \u2014 the cultural shape it took with an emphasis on \u201cethical technology\u201d and \u201ctrustworthy AI\u201d in response to global AI innovation, how it evolved in a process of public events and a high-level expert group on AI established by the European Commission, and how this cultural interest took form as a data interest, explicated in policy and investment recommendations as well as in a set of ethics guidelines. I relied on the thesis that technological development is not neutral. This also means that the culture of a technological design is not a randomly adapted technological style. It is the sum of interests, value frameworks and the negotiations of these, and if these are made visible, we can argue that the technological development of society can be shaped and chosen. The choice to do AI ethically and responsibly is not a simple one; in fact, it is as complex as the culture we are trying to shape with it.<\/p>\n<p>I here want to suggest that to direct the AI momentum of the age of big data, we need an ethics concerned with the embedded data interests and powers, what I have also referred to as a \u201cdata ethics of power\u201d (Hasselbalch, 2019). I have previously (Hasselbalch, 2019) described how European policy and decision-makers in the late 2010s were positioning themselves against a threat to European values and ethics perceived to be embedded in the big data socio-technical systems of what was named \u201cGAFA\u201d (acronym for the four big U.S. tech companies Google, Apple, Facebook, Amazon). 
Considering this \u201ccultural ambience\u201d (Collins, 1987), I propose that the cultural positioning of the European AI agenda may also be viewed as a data ethical choice formulated in direct response to the technological data cultures of the dominant AI technologies at that time.<\/p>\n<p>In the article I focused primarily on the institutional explication of European cultural values as an interest in a technological momentum. I did not seek to predict the actual adoption and implementation of AI in Europe. Nevertheless, a few considerations and recommendations can be made concerning the implementation of a European \u201cthird way\u201d in the global arena with an emphasis on the development of trustworthy AI and the human-centric approach. Following the two-year period that I examined in this article, the European Commission in 2020 published a comprehensive strategy on the digital, AI and data future of Europe (European Commission E, F, G). While this strategy was ambitious with respect to furthering Europe\u2019s position in the \u201cglobal AI race\u201d by advancing a general AI uptake and taking back control of a European data resource space, the values-based European \u201cthird way\u201d was mostly addressed in a legal compliance and requirements framework (Hasselbalch, 2020). Based on this article\u2019s delineation of the complex factors that constitute an ethical \u201cdata culture\u201d, we may argue that this is not enough and that an additional set of combined governance tools is needed to shape the technological AI momentum\u2019s data cultures as \u201ctrustworthy\u201d. For example, we need investment, innovation, research and education specifically in \u201cethical technology\u201d components and processes (from human-in-the-loop features and state-of-the-art anonymization techniques to ethical impact assessments). 
The ethical component of AI implementation in Europe cannot just be ancillary to the European AI uptake; an ethical data culture for Europe needs a dedicated economic, social and political investment.<\/p>\n<p><strong>Notes<\/strong><\/p>\n<p>1. The definition of artificial intelligence has changed throughout history since the 1950s with the development of different scientific and social paradigms. As such, in the 2010s, the term AI still did not have one shared signification. In this article, I do not consider the \u201cintelligence\u201d of AI (technologically or philosophically), but use the term AI generically to address public discourse on the topic. My emphasis is on AI as automated decision-making data intensive systems that are designed to perceive their environment through acquiring data and interpreting the data to decide action to achieve a goal (HLEG A, 2019, p. 36).<\/p>\n<p>2. The following examples are from the Berlin-based NGO AlgorithmWatch\u2019s report published in 2019 that takes stock of Automated Decision-Making (ADMs) in the EU. Retrieved from <a href=\"https:\/\/algorithmwatch.org\/en\/publication\/automating-society-available-now\/\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/algorithmwatch.org\/en\/publication\/automating-society-available-now\/<\/a>.<\/p>\n<p>3. HLEG C, p. 4.<\/p>\n<p>4. HLEG C, p. 2.<\/p>\n<p>5. HLEG C, p. 5.<\/p>\n<p>6. HLEG A, p. 4.<\/p>\n<p>7. HLEG A, p. 9.<\/p>\n<p>8<a href=\"https:\/\/firstmonday.org\/ojs\/index.php\/fm\/article\/download\/10861\/10010?inline=1#8a\">.<\/a> Upon suggestion from the author of this paper and based on a conversation with Sille Obelitz S\u00f8e. Argument for this revision can be found here: <a href=\"https:\/\/dataethics.eu\/why-trust-in-ai-is-not-enough\/\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/dataethics.eu\/why-trust-in-ai-is-not-enough\/<\/a>.<\/p>\n<p>9. HLEG D, p. 2.<\/p>\n<p>10. Smuha, 2019, p. 104.<\/p>\n<p>11. <em>Ibid.<\/em><\/p>\n<p>12. 
Von der Leyen, 2019, p. 13.<\/p>\n<p>13. Moor, 1985, p. 267.<\/p>\n<p>14. Moor, 1985, p. 271.<\/p>\n<p>15. Hughes, 1983, pp. 106\u2013139.<\/p>\n<p>16. Hughes, 1983, p. 107.<\/p>\n<p>17. Hughes, 1987, p. 51.<\/p>\n<p>18. Williams, 1993, p. 6.<\/p>\n<p>19. Williams, 1993, p. 8.<\/p>\n<p>20. Williams, 1993, p. 6.<\/p>\n<p>21. <em>Ibid.<\/em><\/p>\n<p>22. Hughes, 1983, p. 15.<\/p>\n<p>23. Hughes, 1987, p. 198.<\/p>\n<p>24. Hughes, 1987, pp. 76\u201377.<\/p>\n<p>25. Hughes, 1987, p. 69.<\/p>\n<p>26. <em>Ibid.<\/em><\/p>\n<p>27. Pickering, 1992, pp. 3\u20134.<\/p>\n<p>28. Collins, 1987, p. 344.<\/p>\n<p>29. Collins, 1987, p. 338.<\/p>\n<p>30. Collins, 1987, p. 344.<\/p>\n<p>31. Mayer-Sch\u00f6nberger and Cukier, 2013, pp. 98\u2013122.<\/p>\n<p>32. Acker and Clement, 2019, p. 3.<\/p>\n<p>33. Moor, 1985, p. 272.<\/p>\n<p>34. Moor, 1985, p. 273.<\/p>\n<p>35. Collins, 1987, p. 343.<\/p>\n<p>36. Hughes, 1987, p. 15.<\/p>\n<p>37. Brey, 2010, p. 46.<\/p>\n<p>38. Ess, 2013, p. 196.<\/p>\n<p>39. Bowker and Star, 1999, p. 15.<\/p>\n<p>40. I examine the European AI strategy (described in the two communications \u201cArtificial intelligence for Europe\u201d and \u201cBuilding trust in human-centric artificial intelligence\u201d (April 2019), in the \u201cDeclaration of cooperation on artificial intelligence\u201d (April 2018) and the \u201cCoordinated plan on artificial intelligence \u2018made in Europe\u2019\u201d (December 2018)) with a core focus on the work of the European high-level group on AI and the two core deliverables of this group: the \u201cEthics guidelines for trustworthy AI\u201d (April 2019) and the \u201cPolicy and investment recommendations for trustworthy AI\u201d (June 2019). The investigation is based on a qualitative reading of these documents and includes perspectives from the process of the very development of these two documents (with reference to public minutes and records from meetings) as well as from concurrent European policy responses.<\/p>\n<p>41. 
Collins, 1987, p. 344.<\/p>\n<p>42. HLEG B, 2019, pp. 6\u20137.<\/p>\n<p>43. The AI forum held in Helsinki in October 2018, co-hosted by the Ministry of Economic Affairs and Employment of Finland and the European Commission.<\/p>\n<p>44. Finnish Ministry of Economic Affairs and Employment, at <a href=\"https:\/\/tem.fi\/en\/-\/ai-forum-2018-tekoaly-vahvistamaan-euroopan-kilpailukykya\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/tem.fi\/en\/-\/ai-forum-2018-tekoaly-vahvistamaan-euroopan-kilpailukykya<\/a>.<\/p>\n<p>45. Speech retrieved from <a href=\"https:\/\/www.tekoalyaika.fi\/en\/ai-forum-2018\/\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/www.tekoalyaika.fi\/en\/ai-forum-2018\/<\/a>.<\/p>\n<p>46. HLEG B, 2019, p. 16.<\/p>\n<p>47. HLEG B, 2019, p. 28.<\/p>\n<p>48. <em>Ibid.<\/em><\/p>\n<p>49. HLEG B, 2019, p. 16.<\/p>\n<p>50. HLEG B, 2019, p. 9.<\/p>\n<p>51<a href=\"https:\/\/firstmonday.org\/ojs\/index.php\/fm\/article\/download\/10861\/10010?inline=1#51a\">.<\/a> Speech retrieved from <a href=\"https:\/\/ec.europa.eu\/digital-single-market\/en\/news\/first-european-aialliance-assembly\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/ec.europa.eu\/digital-single-market\/en\/news\/first-european-aialliance-assembly<\/a>.<\/p>\n<p>52. HLEG B, 2019, p. 9.<\/p>\n<p>53. HLEG B, 2019, p. 10.<\/p>\n<p>54. HLEG B, 2019, p. 32.<\/p>\n<p>55. HLEG B, 2019, p. 10.<\/p>\n<p>56. See an analysis with key case law references at <a href=\"https:\/\/mediamocracy.files.wordpress.com\/2010\/05\/privacy-and-jurisdiction-in-the-network-society.pdf\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/mediamocracy.files.wordpress.com\/2010\/05\/privacy-and-jurisdiction-in-the-network-society.pdf<\/a>.<\/p>\n<p><a name=\"57\"><\/a>57. HLEG B, 2019, p. 28.<\/p>\n<p>58. HLEG B, 2019, p. 3.<\/p>\n<p><strong>References<\/strong><\/p>\n<p>A. Acker and T. Clement, 2019. 
\u201cData cultures, culture as data \u2014 Special issue of cultural analytics,\u201d <em>Journal of Cultural Analytics<\/em> (10 April).<br \/>\ndoi: <a href=\"https:\/\/doi.org\/10.22148\/16.035\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/doi.org\/10.22148\/16.035<\/a>, accessed 12 November 2020.<\/p>\n<p>E. Alpaydin, 2016. <em>Machine learning: The new AI<\/em>. Cambridge, Mass.: MIT Press.<\/p>\n<p>J. Angwin, J. Larson, S. Mattu and L. Kirchner, 2016. \u201cMachine bias \u2014 There\u2019s software used across the country to predict future criminals. And it\u2019s biased against blacks,\u201d <em>ProPublica<\/em> (23 May), at <a href=\"https:\/\/www.propublica.org\/article\/machine-bias-risk-assessments-in-criminal-sentencing\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/www.propublica.org\/article\/machine-bias-risk-assessments-in-criminal-sentencing<\/a>, accessed 12 November 2020.<\/p>\n<p>G.C. Bowker and S.L. Star, 1999. <em>Sorting things out: Classification and its consequences<\/em>. Cambridge, Mass: MIT Press.<\/p>\n<p>P. Brey, 2010. \u201cValues in technology and disclosive ethics,\u201d In: L. Floridi (editor). <em>Cambridge handbook of information and computer ethics<\/em>. Cambridge: Cambridge University Press, pp. 41\u201358.<br \/>\ndoi: <a href=\"https:\/\/doi.org\/10.1017\/CBO9780511845239.004\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/doi.org\/10.1017\/CBO9780511845239.004<\/a>, accessed 12 November 2020.<\/p>\n<p>J. Burrell, 2016. \u201cHow the machine \u2018thinks\u2019: Understanding opacity in machine learning algorithms,\u201d <em>Big Data &amp; Society<\/em>, volume 3, number 1 (6 January).<br \/>\ndoi: <a href=\"https:\/\/doi.org\/10.1177\/2053951715622512\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/doi.org\/10.1177\/2053951715622512<\/a>, accessed 12 November 2020.<\/p>\n<p>M. Callon and B. Latour, 1992. \u201cDon\u2019t throw the baby out with the bath school! 
A reply to Collins and Yearley,\u201d In: A. Pickering (editor). <em>Science as practice and culture<\/em>. Chicago: University of Chicago Press, pp. 343\u2013368.<\/p>\n<p>H.M. Collins, 1987. \u201cExpert systems and the science of knowledge,\u201d In: W.E. Bijker, T.P. Hughes and T. Pinch (editors). <em>The social construction of technological systems: New directions in the sociology and history of technology<\/em>. Cambridge, Mass.: MIT Press, pp. 329\u2013348.<\/p>\n<p>H.M. Collins and S. Yearley, 1992. \u201cEpistemological chicken,\u201d In: A. Pickering (editor). <em>Science as practice and culture<\/em>. Chicago: University of Chicago Press, pp. 301\u2013326.<\/p>\n<p>C. D\u2019Ignazio and L.F. Klein, 2020. <em>Data feminism<\/em>. Cambridge, Mass.: MIT Press.<\/p>\n<p>D. Epstein, C. Katzenbach and F. Musiani, 2016. \u201cDoing Internet governance: Practices, controversies, infrastructures, and institutions,\u201d <em>Internet Policy Review<\/em>, volume 5, number 3 (30 September).<br \/>\ndoi: <a href=\"https:\/\/doi.org\/10.14763\/2016.3.435\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/doi.org\/10.14763\/2016.3.435<\/a>, accessed 12 November 2020.<\/p>\n<p>C. Ess, 2013. <em>Digital media ethics<\/em>. Cambridge: Polity Press.<\/p>\n<p>G. Hasselbalch, 2020. \u201cEU\u2019s digital, AI and data strategy lacks ambition on ethics and trustworthy AI\u201d (21 February), at <a href=\"https:\/\/dataethics.eu\/eus-digital-ai-and-data-strategy\/\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/dataethics.eu\/eus-digital-ai-and-data-strategy\/<\/a>, accessed 12 November 2020.<\/p>\n<p>G. Hasselbalch, 2019. \u201cMaking sense of data ethics. 
The powers behind the data ethics debate in European policymaking,\u201d <em>Internet Policy Review<\/em>, volume 8, number 2 (13 June).<br \/>\ndoi: <a href=\"https:\/\/doi.org\/10.14763\/2019.2.1401\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/doi.org\/10.14763\/2019.2.1401<\/a>, accessed 12 November 2020.<\/p>\n<p>T.P. Hughes, 1987. \u201cThe evolution of large technological systems,\u201d In: W.E. Bijker, T.P. Hughes and T. Pinch (editors). <em>The social construction of technological systems: New directions in the sociology and history of technology<\/em>. Cambridge, Mass.: MIT Press, pp. 51\u201382.<\/p>\n<p>T.P. Hughes, 1983. <em>Networks of power: Electrification in Western society, 1880\u20131930<\/em>. Baltimore, Md.: Johns Hopkins University Press.<\/p>\n<p>N. Kobie, 2018. \u201cThe complicated truth about China\u2019s social credit system,\u201d <em>Wired<\/em> (7 June), at <a href=\"https:\/\/www.wired.co.uk\/article\/china-social-credit-system-explained\">https:\/\/www.wired.co.uk\/article\/china-social-credit-system-explained<\/a>, accessed 12 November 2020.<\/p>\n<p>V. Mayer-Sch\u00f6nberger and K. Cukier, 2013. <em>Big data: A revolution that will transform how we live, work and think<\/em>. London: John Murray.<\/p>\n<p>F. Merz, 2019. \u201cEurope and the global AI race,\u201d <em>CSS analyses in security policy<\/em>, number 247, at <a href=\"https:\/\/css.ethz.ch\/content\/dam\/ethz\/special-interest\/gess\/cis\/center-for-securities-studies\/pdfs\/CSSAnalyse247-EN.pdf\">https:\/\/css.ethz.ch\/content\/dam\/ethz\/special-interest\/gess\/cis\/center-for-securities-studies\/pdfs\/CSSAnalyse247-EN.pdf<\/a>, accessed 12 November 2020.<\/p>\n<p>J.H. Moor, 1985. \u201cWhat is computer ethics?\u201d <em>Metaphilosophy<\/em>, volume 16, number 4, pp. 
266\u2013275.<br \/>\ndoi: <a href=\"https:\/\/doi.org\/10.1111\/j.1467-9973.1985.tb00173.x\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/doi.org\/10.1111\/j.1467-9973.1985.tb00173.x<\/a>, accessed 12 November 2020.<\/p>\n<p>C. O\u2019Neil, 2016. <em>Weapons of math destruction: How big data increases inequality and threatens democracy<\/em>. London: Penguin Books.<\/p>\n<p>F. Pasquale, 2015. <em>The black box society: The secret algorithms that control money and information<\/em>. Cambridge, Mass.: Harvard University Press.<\/p>\n<p>A. Pickering, 1992. \u201cFrom science as knowledge to science as practice,\u201d In: A. Pickering (editor). <em>Science as practice and culture<\/em>. Chicago: University of Chicago Press, pp. 1\u201326.<\/p>\n<p>M. Spielkamp (editor), 2019. \u201cAutomating society: Taking stock of automated decision-making in the EU,\u201d at <a href=\"https:\/\/algorithmwatch.org\/wp-content\/uploads\/2019\/01\/Automating_Society_Report_2019.pdf\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/algorithmwatch.org\/wp-content\/uploads\/2019\/01\/Automating_Society_Report_2019.pdf<\/a>, accessed 12 November 2020.<\/p>\n<p>N.A. Smuha, 2019. \u201cThe EU approach to ethics guidelines for trustworthy artificial intelligence,\u201d <em>Computer Law Review International<\/em>, volume 20, number 4, pp. 97\u2013106.<\/p>\n<p>R. Williams, 1993. \u201cCulture is ordinary,\u201d In: A. Gray and J. McGuigan (editors). <em>Studying culture: An introductory reader<\/em>. London: Edward Arnold, pp. 5\u201314.<\/p>\n<p>S. Zuboff, 2014. 
\u201cA digital declaration,\u201d <em>Frankfurter Allgemeine<\/em> (9 September), at <a href=\"https:\/\/www.faz.net\/aktuell\/feuilleton\/debatten\/the-digital-debate\/shoshan-zuboff-on-big-data-as-surveillance-capitalism-13152525.html\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/www.faz.net\/aktuell\/feuilleton\/debatten\/the-digital-debate\/shoshan-zuboff-on-big-data-as-surveillance-capitalism-13152525.html<\/a>, accessed 12 November 2020.<\/p>\n<p>C. \u00c5sberg and N. Lykke, 2010. \u201cFeminist technoscience studies,\u201d <em>European Journal of Women\u2019s Studies<\/em>, volume 17, number 4, pp. 299\u2013305.<br \/>\ndoi: <a href=\"https:\/\/doi.org\/10.1177\/1350506810377692\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/doi.org\/10.1177\/1350506810377692<\/a>, accessed 12 November 2020.<\/p>\n<p><em><strong>European AI agenda document<\/strong><\/em><\/p>\n<p>European Commission A, 2018. \u201cArtificial intelligence for Europe\u201d (25 April), at <a href=\"https:\/\/ec.europa.eu\/digital-single-market\/en\/news\/communication-artificial-intelligence-europe\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/ec.europa.eu\/digital-single-market\/en\/news\/communication-artificial-intelligence-europe<\/a>, accessed 12 November 2020.<\/p>\n<p>European Commission B, 2018. \u201cDeclaration of cooperation on artificial intelligence,\u201d at <a href=\"https:\/\/ec.europa.eu\/digital-single-market\/en\/artificial-intelligence#Declaration-of-cooperation-on-Artificial-Intelligence\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/ec.europa.eu\/digital-single-market\/en\/artificial-intelligence#Declaration-of-cooperation-on-Artificial-Intelligence<\/a>, accessed 12 November 2020.<\/p>\n<p>European Commission C, 2018. 
\u201cCoordinated plan on artificial intelligence \u2018made in Europe\u2019\u201d (7 December), at <a href=\"https:\/\/ec.europa.eu\/commission\/presscorner\/detail\/ro\/memo_18_6690\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/ec.europa.eu\/commission\/presscorner\/detail\/ro\/memo_18_6690<\/a>, accessed 12 November 2020.<\/p>\n<p>European Commission D, 2019. \u201cBuilding trust in human-centric artificial intelligence\u201d (9 April), at <a href=\"https:\/\/ec.europa.eu\/digital-single-market\/en\/news\/communication-building-trust-human-centric-artificial-intelligence\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/ec.europa.eu\/digital-single-market\/en\/news\/communication-building-trust-human-centric-artificial-intelligence<\/a>, accessed 12 November 2020.<\/p>\n<p>European Commission E, 2020. \u201cShaping Europe\u2019s digital future,\u201d at <a href=\"https:\/\/ec.europa.eu\/info\/strategy\/priorities-2019-2024\/europe-fit-digital-age\/shaping-europe-digital-future_en\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/ec.europa.eu\/info\/strategy\/priorities-2019-2024\/europe-fit-digital-age\/shaping-europe-digital-future_en<\/a>, accessed 12 November 2020.<\/p>\n<p>European Commission F, 2020. \u201cA European strategy for data,\u201d at <a href=\"https:\/\/ec.europa.eu\/digital-single-market\/en\/european-strategy-data\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/ec.europa.eu\/digital-single-market\/en\/european-strategy-data<\/a>, accessed 12 November 2020.<\/p>\n<p>European Commission G, 2020. 
\u201cWhite paper on artificial intelligence: A European approach to excellence and trust\u201d (18 February), at <a href=\"https:\/\/ec.europa.eu\/info\/publications\/white-paper-artificial-intelligence-european-approach-excellence-and-trust_en\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/ec.europa.eu\/info\/publications\/white-paper-artificial-intelligence-european-approach-excellence-and-trust_en<\/a>, accessed 12 November 2020.<\/p>\n<p>Finnish Ministry of Economic Affairs and Employment, 2018. \u201cAI Forum 2018: Artificial intelligence to boost European competitiveness\u201d (13 September), at <a href=\"https:\/\/tem.fi\/en\/-\/ai-forum-2018-tekoaly-vahvistamaan-euroopan-kilpailukykya\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/tem.fi\/en\/-\/ai-forum-2018-tekoaly-vahvistamaan-euroopan-kilpailukykya<\/a>, accessed 12 November 2020.<\/p>\n<p>HLEG A, High-Level Expert Group on Artificial Intelligence, 2019. \u201cEthics guidelines for trustworthy AI\u201d (8 April), at <a href=\"https:\/\/ec.europa.eu\/digital-single-market\/en\/news\/ethics-guidelines-trustworthy-ai\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/ec.europa.eu\/digital-single-market\/en\/news\/ethics-guidelines-trustworthy-ai<\/a>, accessed 12 November 2020.<\/p>\n<p>HLEG B, High-Level Expert Group on Artificial Intelligence, 2019. \u201cPolicy and investment recommendations for trustworthy AI\u201d (26 June), at <a href=\"https:\/\/ec.europa.eu\/digital-single-market\/en\/news\/policy-and-investment-recommendations-trustworthy-artificial-intelligence\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/ec.europa.eu\/digital-single-market\/en\/news\/policy-and-investment-recommendations-trustworthy-artificial-intelligence<\/a>, accessed 12 November 2020.<\/p>\n<p>HLEG C, High-Level Expert Group on Artificial Intelligence, 2018. 
\u201cMinutes of the first meeting\u201d (27 June), at <a href=\"https:\/\/ec.europa.eu\/transparency\/regexpert\/index.cfm?do=groupDetail.groupMeeting%20&amp;meetingId=5190\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/ec.europa.eu\/transparency\/regexpert\/index.cfm?do=groupDetail.groupMeeting%20&amp;meetingId=5190<\/a>, accessed 12 November 2020.<\/p>\n<p>HLEG D, High-Level Expert Group on Artificial Intelligence, 2018. \u201cReport of the AI HLEG workshop of 20 September 2018,\u201d at <a href=\"https:\/\/ec.europa.eu\/futurium\/en\/european-ai-alliance\/report-ai-hleg-workshop-2092018\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/ec.europa.eu\/futurium\/en\/european-ai-alliance\/report-ai-hleg-workshop-2092018<\/a>, accessed 12 November 2020.<\/p>\n<p>Organisation for Economic Co-operation and Development (OECD), 2019. \u201cRecommendation of the Council on Artificial Intelligence,\u201d at <a href=\"https:\/\/legalinstruments.oecd.org\/en\/instruments\/OECD-LEGAL-0449\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/legalinstruments.oecd.org\/en\/instruments\/OECD-LEGAL-0449<\/a>, accessed 12 November 2020.<\/p>\n<p>U. von der Leyen, 2019. \u201cA Union that strives for more. 
My agenda for Europe: Political guidelines for the next European Commission 2019\u20132024,\u201d at <a href=\"https:\/\/ec.europa.eu\/commission\/sites\/beta-political\/files\/political-guidelines-next-commission_en.pdf\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/ec.europa.eu\/commission\/sites\/beta-political\/files\/political-guidelines-next-commission_en.pdf<\/a>, accessed 12 November 2020.<\/p>\n<hr width=\"300\" \/>\n","protected":false}}