
Modern Alchemy: Solving the AI Mystery

“Algorithms, you get me like no other.” A person leans into a hole in an otherwise empty, plain wall. Outside is a big sky with billions of distant stars. “30 songs you didn’t know you loved yet,” the Spotify ad continues. A spot for your eyes to dwell on and escape into as you rumble through the dark, sizzling summer-hot tunnels, squeezed between other subway commuters.

The Spotify ad billboards you see in the New York subway these days tell you how mystic personalization algorithms will change your life. They are higher mystic beings that somehow know you better than you know yourself. They will foresee your future and transform it into something better (a starry, boundless sky); you can “get lost” in their universe and escape the plain wall of everyday life. Their science is the alchemy of today: creating gold out of rocks.

https://www.youtube.com/watch?v=rGLFVVFQCjA

Roland Barthes would have loved this. Apply a classic semiotic analysis to the Spotify ads and you have a full-blown modern myth. In popular commercial culture, personalization algorithms, and AI in general, are described as the answer to all our contemporary human desires. The products we buy and the services we use will know us so well that we will not need to make life decisions ourselves. They will help and support us in fulfilling the dreams we didn’t know we had. Even scientists and academics talk about AI and algorithms as one and the same thing, closely associated with a science-fiction universe of robots: either the superintelligent evolution of humankind or our demise. You pick. Regulators and policymakers do the same, grasping for instruments of regulation and constantly lagging behind a mystic, unruly being in constant evolution.

Numbed by obscurity
The mystery does not come from some inexplicable and mystic power of AI. It comes from the lack of transparency in the way data analytics systems are applied to, and transform, our everyday lives. We are basically made numb by obscurity. Let’s just call it what it is. Artificial intelligence, personalization algorithms, cognitive computing, machine learning systems, whatever you choose to call them, are not science fiction; they are different types and forms of data analysis, data profiling and prediction. They learn from data and evolve with data. They are quantifiable, designed programmes. They don’t have mystic access to people’s inner selves. They have gained access to our inner selves by retrieving and processing our very concrete digital data (location, communication, metadata etc.) when we use everyday technologies and digital services.
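To take some of the mystery out of it, here is a minimal sketch, in plain Python, of the kind of pattern-matching a music personalization algorithm boils down to. All the users, songs and play counts are invented for illustration; real systems are vastly larger and more sophisticated, but the principle is the same: data analysis, not magic.

```python
# "Personalization" as plain pattern-matching over logged behaviour.
# Every profile here is just recorded data: song -> play count.
from math import sqrt

profiles = {
    "you":     {"song_a": 12, "song_b": 3},
    "user_42": {"song_a": 10, "song_b": 4, "song_c": 8},
    "user_99": {"song_b": 1, "song_c": 2, "song_d": 9},
}

def similarity(p, q):
    """Cosine similarity between two play-count profiles."""
    dot = sum(p.get(s, 0) * q.get(s, 0) for s in set(p) | set(q))
    norm = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

# "Songs you didn't know you loved yet": whatever the statistically
# most similar listener plays that you haven't played yet.
you = profiles["you"]
neighbour = max((u for u in profiles if u != "you"),
                key=lambda u: similarity(you, profiles[u]))
print([s for s in profiles[neighbour] if s not in you])  # ['song_c']
```

Everything the ad dresses up as insight into your inner self sits in that dictionary of play counts: behavioural data, collected and compared.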

And they are an economy. We have an evolving type of economy based on designed systems that are trained on data to find patterns in data, create profiles, and predict and respond to data: to make meaning out of data and transform it into value. Some have called this the Algorithm Economy, because it is the data analytics systems that are the actual value makers. They are the recipes of successful businesses, and they are of course based on subjective assumptions, perhaps even biases, and on interests: commercial, governmental, scientific and so on.

The industry giants of consumer tech know this. They are all investing heavily in AI: Facebook, Google, Amazon, IBM and others are opening new research departments, hiring AI, data analytics and machine learning staff in great numbers, and dedicating large shares of their budgets to the area.

But the people the systems act on don’t. There is a total lack of transparency around the proprietary, value-making data analytics systems. They are deployed invisibly. People have no access to them, no understanding of them and no remedies; developers and new businesses have no support in designing data-ethical systems; and regulators are grappling in the dark for regulatory accountability tools to match the complexity of the digital age.

An analytical approach to the data analytics economy
What we need is an analytical approach to the data analytics economy, one that empowers regulators, data-ethical businesses and people. A starting point is to emphasise the complexity of the new economy (not its obscurity).

A few examples:

There are different types of artificial intelligence, some more prone to security risks, some to ethical risks, some to existential risks for humans, and some to all of the above. “Narrow AI” or “weak AI” systems focus on doing just one task and don’t attempt to replicate human cognitive abilities in the way that “full AI”/“strong AI” aspires to, an ambition associated with systems such as IBM’s Watson or Google’s DeepMind.

There are different areas of application of AI. Many do not involve personal data, but their possible security risks will still affect people. AI systems in air warfare, for example, are an area in steep development.

There are different types of algorithms, and different elements of their design can be reviewed for ethical implications. For example: what biases are embedded in the very training data from which machine learning systems evolve? The EU data protection regulation emphasises privacy impact assessments, which evaluate how a given process of data analysis affects individuals’ privacy rights. But in the complex reality of autonomous systems, what we need is a more all-round assessment that also takes into account the broader ethical implications of applied data analysis. Why not embed an ethical impact assessment in every application of AI and algorithms that involves people’s personal data? Discern the embedded biases, the potentially discriminatory actions of a system, the power balance between the system (the data processor) and the individual, and so on. Embed ethical considerations in the systems themselves; a minimal sketch of one such check follows this list.

You may continue…
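To make the ethical impact assessment idea less abstract, here is a minimal sketch, in plain Python, of one automated check it could include: measuring whether the training data itself hands out positive outcomes at very different rates across a protected group. The records, group names and warning threshold below are all invented for illustration, not a standard.

```python
# Hypothetical training records: (protected_attribute, outcome_label).
# A model trained on skewed labels will likely reproduce the skew.
from collections import defaultdict

training_data = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 0), ("group_b", 1), ("group_b", 0),
]

def positive_rates(records):
    """Share of positive labels per group in the training data."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, label in records:
        totals[group] += 1
        positives[group] += label
    return {g: positives[g] / totals[g] for g in totals}

rates = positive_rates(training_data)
print(rates)  # {'group_a': 0.75, 'group_b': 0.25}

# A crude demographic-parity flag: a large gap between groups is a
# signal for human review before the system is trained and deployed.
gap = max(rates.values()) - min(rates.values())
if gap > 0.2:  # arbitrary illustrative threshold
    print(f"Bias warning: positive-rate gap of {gap:.0%} across groups")
```

A check like this catches only one narrow kind of bias; the point of an ethical impact assessment is precisely that such quantitative flags are combined with human review of context, power balance and potential discrimination.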

The design processes of all these new types of data science and analytics can be discerned, categorised and subjected to ethical review. But we also need an all-encompassing understanding of the streams of implications that flow from these systems.

The developers and computer scientists understand the design processes better than anyone else; regulators understand the potential and the limitations of regulation; people understand their own needs and limits; businesses understand the commercial reality; social scientists understand the societal and relational implications; philosophers understand the human condition and its direction.

We are often made arrogant by expertise and refuse to let the understanding of other disciplines help solve a problem that is not just technical, not just regulatory, not just social or economic, and definitely not just personal in nature. But in order to design adequate safeguards, regulatory tools, frameworks and data-ethical systems, we need it all.