Artificially intelligent systems are on the rise in health care. Aimed at augmenting diagnostics, operations, and treatment regimens, AI medical applications are now being implemented in various ways throughout health care systems all over the world.
However, it has become clear that implementing AI decision support systems in health care (as well as other domains) poses two different challenges that go in opposite directions: sometimes people trust machines too much, and sometimes people do not trust machines enough.
Just like the little girl in the story of Goldilocks and the three bears, we need to find the right balance when it comes to trust in AI health care devices. Not too little; not too much.
To do this, however, we need to confront two different cognitive biases, each pulling in opposite directions in terms of trust. The first is called automation bias, and it refers to a blind preference for automated responses over manual sources of information.
There are many examples of automation bias in action. Consider, for instance, the story of an American couple who got lost in the wilderness by blindly following instructions from a GPS unit, or the story of three women who, unable to find their way home late at night, ended up driving into a lake after their GPS unit re-routed them. Presumably, such incidents happen fairly often. Instead of using traditional maps, asking for directions, or listening to gut feelings, some people blindly follow automated responses like those coming from a GPS unit, while ignoring clear signs that something is wrong.
The second bias, pulling the other way, is called algorithm aversion bias, and it conversely refers to an unfounded preference for manual responses over automated sources of information. This bias is in many ways the flip side of automation bias, leading people to distrust automated responses – even when it is clear that automated responses are more accurate and reliable than manual responses.
Both kinds of bias can lead to critical mistakes in the context of health care.
For instance, putting too much stock in algorithms can lead to health care workers prescribing the wrong medication, or to doctors performing the wrong operations. Conversely, not trusting algorithmic decision support systems enough can lead to missed health care opportunities and medical errors that could have been avoided.
It is therefore crucial to navigate between these biases and find the right balance of trust in AI health care systems, or the Goldilocks zone of trust in AI, if we are to realise the huge potential of artificially intelligent decision support systems within the health care system.
Full Disclosure Is One Remedy
One apparent way to deal with the most egregious cases of automation and algorithm aversion bias is to disclose estimates of the accuracy or confidence level assigned to the response in question. Given such estimates, people are forced to acknowledge, on the one hand, that automated responses are not infallible, and on the other, that AI recommendations can be well supported and relatively certain.
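As a rough illustration, here is a minimal sketch in Python of what pairing a recommendation with a disclosed confidence estimate might look like in a decision support interface. The class and field names are hypothetical, and a real system would obtain the confidence from a calibrated model rather than a hard-coded value.

```python
from dataclasses import dataclass


@dataclass
class Recommendation:
    """A decision-support suggestion paired with an explicit confidence estimate."""
    label: str         # e.g. a suggested diagnosis or action
    confidence: float  # the model's estimated probability, between 0 and 1


def present(rec: Recommendation) -> str:
    """Frame the output as supportive rather than directive, and always show the
    confidence estimate so the user can weigh it against their own judgement."""
    return (
        f"Suggested (not final) assessment: {rec.label} "
        f"(model confidence: {rec.confidence:.0%}). "
        "Please verify against your own clinical findings."
    )


# Example: a hypothetical risk alert surfaced together with its confidence level.
print(present(Recommendation(label="elevated sepsis risk", confidence=0.72)))
```

The point of the sketch is simply that the confidence figure is always visible alongside the suggestion, rather than hidden behind a seemingly authoritative answer.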
For automation bias in particular, possible remedies include highlighting errors, emphasising the supportive (as opposed to directive) nature of AI recommendations, and decreasing the complexity of the information that flows from the device. Notably, this needs to be done on a continuous basis, as automation bias can quickly creep back in as people get more and more used to a particular device or application.
For algorithm aversion bias in particular, one remedy seems to be to increase users' understanding of algorithmic decision processes, as subjective understanding appears to be correlated with willingness to utilise algorithmic decision support systems. That is, health care workers are more inclined to use AI health care devices when they can follow the process leading to a particular recommendation.
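To make the idea of understanding the process slightly more concrete, here is a small sketch, again in Python and with made-up factor names and weights, of how a recommendation could be accompanied by a plain-language list of the factors that contributed most to it. A real system would derive these contributions from whatever explanation method it uses; the sketch only shows how they might be surfaced.

```python
def explain(contributions: dict, top_n: int = 3) -> str:
    """Return the factors that pushed the recommendation most strongly,
    ordered by the absolute size of their contribution."""
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    lines = [f"  {name}: {weight:+.2f}" for name, weight in ranked[:top_n]]
    return "Main factors behind this recommendation:\n" + "\n".join(lines)


# Example with hypothetical factor names and weights.
print(explain({
    "lactate level": +0.41,
    "heart rate": +0.27,
    "age": +0.08,
    "blood pressure": -0.05,
}))
```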
It is worth noting, however, that some of these remedies also pull in different directions. Highlighting errors, for instance, will presumably not only decrease automation bias but also increase algorithm aversion bias, as people are generally quick to lose confidence in machines when they visibly err. Striking a balance in this regard will therefore be difficult.
It is also worth noting that cognitive biases are generally at their strongest when people are low on cognitive resources. It is therefore particularly troublesome that pressure and stress amongst health care workers are on the rise in many parts of the world due to a combination of COVID-19, new (and often expensive) treatment technologies, and increasingly ageing populations.
Indeed, this scarcity of health care resources is often cited as one of the primary reasons that we need to implement AI decision support systems in health care in the first place. However, no matter how much potential these AI health care technologies may have, it does not really matter if people do not trust them when they should or trust them when they should not.
It is therefore imperative that we find the right balance of trust in AI. Not too little; not too much.