
What Does It Mean To Be Human?

By Sille Obelitz Søe and Jens-Erik Mai

“What does it mean to be human?” is perhaps the most fundamental and important question we can ask at the moment. This might sound odd at a time when the world is fighting a pandemic, democracies are becoming more divided, and we are facing challenges such as climate change. However, think about it for a second. The means we resort to when dealing with these challenges are new technologies, intelligent ‘machines’. We place our faith in artificial intelligence and smart devices as the answers to all our problems (just think about the great variety of contact-tracing apps that have appeared recently). With the increased reliance on artificial intelligence (AI) come ethical considerations about the hierarchies between humans and machines, as well as the impact of AI on humans. At the foundation of these ethical considerations lies the question “What does it mean to be human?”


Thus, an answer to this question should permeate all our discussions of AI technologies, their use, their purpose, and their limitations. It is a huge and essential question. In a sense, asking questions like this one is part of what it means to be human as compared to ‘machines’. It is the capacity to ask about our own condition, our nature – to reflect – that currently separates us from AI. When we talk about AI, we are talking about machine learning algorithms, deep neural networks, and automated systems used for decision-making. The fully autonomous, conscious, human-like, and highly intelligent AI is at least some time out in the future – and, according to many AI researchers, it will never be achieved. When asking what it means to be human, we are also asking whether such an AI should be pursued at all. Thus, it is in the relation between humans and ‘machines’ (AI) that the question of what it means to be human has its current force and importance. Yet, the question is most often implicit rather than explicit. If you break down the ethical discussions of, for instance, AI, algorithmic accountability, and automated decision-making, and ask why they are important, you end up with the question of the nature of humanity.

So, what does it mean to be human? According to Brett Frischmann and Evan Selinger in their book Re-engineering Humanity, it has everything to do with free will, autonomy, and agency.

Being human is about the capacity to decide for yourself (free will) combined with the ability to act out your will (autonomy and agency).

Within the combination of free will and autonomy lies the capacity for moral reasoning and responsibility – one of the fundamental capacities distinguishing humans from ‘machines’. Thus, in relation to artificial intelligence, this answer means that the use of AI should never restrict or otherwise limit people’s free will, nor should it restrict their autonomy and options for action. Further, it should remain possible for people to make their own moral judgements rather than be bound by an automated outcome calculated by a system they might not be able to understand.

According to Shoshana Zuboff, another fundamental part of being human is our need to be together. We are essentially social beings. This raises questions about our relation to ‘machines’ – about the development and use of technology, and the status new digital technologies have attained in our lives and societies.

Social Beings Need Moral Reasoning
Although free will, autonomy, and moral decision-making are often framed as individual human capacities, it is actually in the social domain that they become most important. It is because we are social beings, in need of figuring out how to be together, that we need moral reasoning and decision-making.

Our need to be together cannot be fully satisfied by ‘machines’.

At a time when we need to stay physically apart, technology can help us some of the way – we can have video meetings and talk on the phone – but we have also learned that the human mediated by the screen can never fulfil our need to be physically together. Further, while moral decision-making might actually be something we can incorporate into AI – if we can agree on which moral theory to work from – the social aspect of humanity is unique to humans. AI has no need of a social life. More importantly, social intelligence – the human capacity to be around other humans, to act according to how others act, to know how to be social – is one of the very hard problems in AI. In a sense, human sociality is all about the ability to understand the context – the social context – a context that is unfixed and continuously changing. AI systems and other automated systems have a notoriously hard time understanding context, and it is not straightforward to train a system to understand sociality and obtain social intelligence. Social intelligence – that is, understanding a situation from a different perspective, being able to grasp what is going on from another human being’s point of view – is a fundamental and strictly human capacity.

If we do not think about what status ‘machines’ should have in relation to humans – that is, if we do not think about what it means to be human – we risk giving AI primacy over humans. We tend to focus on where AI exceeds us and thus overlook its limitations. When dealing with challenges such as the pandemic, climate change, and especially the shape of democracy, we should ask whether ‘machines’ are the right ones to solve our problems. Are they actually capable of making the best decisions – decisions that make sense in a social context? Or do we need humans, with their unique social skills and their ability to put themselves in others’ positions, to be in charge of our efforts? Of course ‘machines’ can help us, but they should do no more than that. It is imperative that we keep in mind, at all times, what it means to be human.

About the authors:

Sille Obelitz Søe is Tenure-Track Assistant Professor in Philosophy of Information at the University of Copenhagen, Department of Communication. Jens-Erik Mai is Professor and Head of Department at the University of Copenhagen, Department of Communication.