
Children Need to Understand (Un)Intelligent Machines

In the summer of 2017, I went to the ArtScience Museum in Singapore to see the exhibition “HUMAN+: The Future of Our Species”. In particular, I wanted to see what an explanatory text at the museum described as “one of the most realistic female humanoid social robots in the world”: a robot named Nadine, said to be socially intelligent and friendly, who “greets people she meets, makes eye contact and remembers all the nice chats she has had with people”.

I went to the museum at a quiet time, which meant that when I entered the room where Nadine was exhibited, I was all by myself. Nadine was sitting at a desk, typing at its computer. As I came in, it looked up and greeted me, and we had a (human) face to (robot) face conversation. At that time, I was working as a teacher, and as part of our conversation, Nadine asked me what a teacher did.

The robot Nadine is designed to look, act, and talk like a human. Private photos taken at the ArtScience Museum in Singapore, 2017, where it interacted as part of the exhibition “HUMAN+: The Future of Our Species”.

Since this experience, I have often thought about why digital technologies are (among other and more helpful digital designs) being used to create digital ‘humans’ that look and act exactly like us, and what this development means for us. Even though Nadine and similar robots did not exist 80 years ago, thoughts and discussions around this issue were already present back then.

Electronic ‘Brains’ Versus Human Brains

As early as the 1940s, when the first computers were built, terms such as ‘electronic brains’ emerged, along with analogies between the way a computer and a human brain work. Such descriptions were criticised by computer scientists who wanted people to understand how machines actually work. For example, the Danish professor of computer science Peter Naur repeatedly rejected any similarities between a human brain and a computer. Instead, he focused on how a machine is different from us: “People are very hopeless at performing what an information processor is best at, namely to endlessly repeat the same action. And a computer is completely unable to describe what is the core of human consciousness: the flow of thoughts, concepts, and associations”, he stated.

In the 1960s, German-American computer scientist Joseph Weizenbaum developed a chat program called ELIZA that could simulate a conversation, in its case with a psychotherapist. It worked by following fixed rules, producing predefined answers based on the user’s input – completely mechanically. With the program, Weizenbaum wanted to demonstrate the superficiality of human-machine communication. The machine was completely unable to understand the user; yet many people associated the program with human-like emotions. ELIZA seemed intelligent to its users – contrary to what Weizenbaum actually intended to demonstrate.
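To make the mechanics concrete, here is a minimal sketch in Python of how an ELIZA-style program can work (an illustrative reconstruction, not Weizenbaum’s original code): it matches keywords in the input against a fixed list of rules and returns a canned answer, with no understanding involved.

    # A minimal ELIZA-style responder: fixed keyword rules, canned answers.
    # Illustrative sketch only, not Weizenbaum's original program.
    RULES = [
        ("mother", "Tell me more about your family."),
        ("sad", "Why do you think you feel sad?"),
        ("i am", "How long have you been that way?"),
    ]
    DEFAULT = "Please, go on."

    def respond(user_input: str) -> str:
        text = user_input.lower()
        for keyword, answer in RULES:
            if keyword in text:   # purely mechanical string matching
                return answer     # predefined answer, no understanding
        return DEFAULT

    print(respond("I am worried about my mother"))
    # -> Tell me more about your family.

However simple, this is the whole trick: the program never models what the user means, only which words appear.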

Yet today, we talk about artificial intelligence, smart technologies, and machines that learn. Instead of focusing on how machines are different from us, systems are often designed to look and act exactly like us, and it is becoming even harder to understand how they work. Since the program of a machine such as Nadine is created by people to simulate human intelligence, and since it is designed to use data from real humans to ‘learn’ how to communicate and what information to pass on when communicating with other humans, people may be misled into believing, or feeling, that it is intelligent.

Furthermore, when people teach machines how to react, a number of ethical dilemmas arise. One of them is what morality should be built into the programs. Who and what determines what is good and bad? Should algorithms alone decide who to hire and fire in a business? When are people required in a program’s decision loop? And in the case of Nadine: how should such robots be programmed to process information if we teach them nonsense – and how should they distinguish nonsense from sensible information?

Understanding the Systems

“There is no way around it, we all need to understand computers,” Naur said 54 years ago, in 1968. He said that “many of us who are close to computers, and who think about their societal consequences, feel that we must, at every favorable opportunity, emphasize that the understanding of computer programming must be brought into general education and thus become common property”. As a researcher within the field of technology comprehension in compulsory education, I find it absolutely reasonable that compulsory education includes learning to fundamentally understand how machine learning models in our society actually work. All of us need to know how machines like Nadine work: that they have no intelligence, no human emotions, and no consciousness.

And the same applies to a host of other machine learning systems that many of us use in our daily lives. All of us need to understand how music services can use users’ data to continuously predict and plan what they will listen to; how medical researchers can use patients’ data to predict diseases; how social media can use users’ data to continuously adjust what is displayed in their news feeds; how breweries can use customers’ data to continuously adjust their recipes to match their tastes; how search engines can use users’ data to continuously predict what will be searched; and how supermarkets can use customers’ data to continuously predict what they will buy (and not buy) – for good and for bad. All of us need to know that algorithms in programs are created by people and then develop completely mechanically, adjusting their output based on the inputs they receive, as the sketch below illustrates. A machine does not experience, does not learn, and does not understand anything in a human sense.
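As a toy illustration of that mechanical adjustment (hypothetical data, not any real service’s algorithm), a ‘prediction’ can be as simple as counting what a user has done most often and serving more of the same:

    # A toy 'recommender': tallies past plays, predicts the most played genre.
    # Hypothetical sketch; real services use far larger models, but the
    # principle is the same: output adjusted mechanically from input data.
    from collections import Counter

    listening_history = ["pop", "jazz", "pop", "rock", "pop", "jazz"]

    def predict_next_genre(history: list[str]) -> str:
        counts = Counter(history)           # count the inputs received
        return counts.most_common(1)[0][0]  # return the most frequent one

    print(predict_next_genre(listening_history))  # -> pop

Nothing here experiences or understands music; the program only counts.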

“When computer science is well established in general education, the mystery that for many people surrounds computers will dissolve into nothing”, Naur argued in the 1960s. This is still necessary today if all of us are to be able to decide and co-determine how we want our common society and our own lives to develop.

This is a revised, updated, and translated version of a previous blog post, published on Folkeskolen.dk.

References

ArtScience Museum Singapore (2017). HUMAN+ The Future of Our Species. Exhibition. https://www.marinabaysands.com/museum/exhibition-archive/human-plus.html

Naur, P. (1967). Datamaskinerne og samfundet. Søndagsuniversitetet, vol. 85. Munksgaard.

Naur, P. (1968). Demokrati i datamatiseringens tidsalder. Kriterium, vol. 3, no. 5, June 1968. Nyt Nordisk Forlag Arnold Busck.

Elisa Nadire Caeli, Postdoctoral Researcher, Danish School of Education, Aarhus University Copenhagen

Quotes by Peter Naur are translated into English by me.