Brain-computer interfaces will not only transform the way we treat spine and brain injuries; this technology also has the potential to influence how we define a human being and how we interact with each other. Although these concerns are still hypothetical, we need to consider them now to be prepared for what this technology may bring.
Imagine having an AI chip replace part of your skull, with flexible threads of electrodes inserted into your brain. This may sound like science fiction, but Elon Musk announced on February 1 that initial human trials might be conducted later this year.
Musk co-founded Neuralink, a company that develops brain-computer interfaces (BCIs). For now, its goal is to treat brain and spine injuries, but its overall aim is to create a symbiosis between artificial intelligence and the human brain.
In August 2020, the company held a live demonstration of pigs with Neuralink implants. According to Musk, the pigs were “healthy and happy”, and the company had even successfully removed implants without causing brain damage. The video also shows how the company's AI predicts the movements of a walking pig. Imagine a future where BCIs can help people with severe spine injuries walk and move again.
But interpreting brain activity is easier said than done. Just picking up a cup of coffee activates a complex interplay between different brain regions. This is where AI enters the stage: to decode brain patterns and precisely predict the intended movements. Once AI is involved, moving a limb becomes an interplay between human intention and the AI's interpretation of brain activity.
But what if the AI misinterprets brain activity and makes a limb act in an unintended way? Philipp Kellmeyer, a neurologist and neuroethicist at the Neuromedical AI Lab at the University of Freiburg, explains in a Nature article that such situations of hybrid agency will create accountability gaps. Is it the human being or the company behind the BCI that should be held accountable for the consequences of unintended movements? Human agency and free will need to be discussed and protected as BCIs evolve. Should we simply accept dual agency between human beings and AI implants in the future?
A concern of the more distant future could be “body hacks”. As the neuroscientist Greg Gage demonstrates in his 2015 TED Talk, it is possible to control another person's body when given the right input. As brain implants develop, we will need to reflect on the risk of implants being hacked and of losing control of our bodies. Devices that can read our brain signals and, in time, connect us directly to the internet and to other human brains will also blur our understanding of where one individual stops and the next begins. How will it change our interactions, and society in general, if we can log into each other's brains or bodies? Using BCIs will require extensive psychological and ethical discussion of how we define and understand individual humans and social interactions.
Brain-computer interfaces also raise concerns about increased inequality. If they are used to enhance the cognitive functions of privileged people, the gap between rich and poor could widen on a global scale. We will need to make sure that this technology is not offered only to a select group but is made available to everyone who needs it to treat brain or spine injuries.
But it is not only academics and scientists who need to prepare for a future with this technology. Just as regulation must be in place for accidents involving self-driving cars, regulators must be informed of the possible consequences of BCIs and of how to deal with the dual agency and accountability gap described by Kellmeyer.
Brain-computer interfaces will not only transform the way we treat spine and brain injuries; this technology also has the potential to influence how we define a human being and how we interact with each other. Eventually, it might even change our very species. And although these concerns are presently hypothetical, we need to consider them now to be prepared for what this technology may bring.
Signe Agerskov is a member of the European Group on Blockchain Ethics (EGBE) and is researching blockchain ethics at the European Blockchain Center.