There are good reasons to disclose to patients when artificial intelligence is used as part of their diagnosis, treatment, or outcome predictions in serious medical decisions.
Suppose you went to the hospital due to an illness, and suppose you were given a diagnosis, a recommendation for treatment, and a predicted outcome by a doctor. Would it make a difference to you if your diagnosis, treatment plan, and outcome predictions were based on a technology using artificial intelligence? Would you want to know? In fact, if you have been to the hospital recently, do you know if artificial intelligence was used as part of your treatment?
Health care systems around the world are currently seeing a surge of technology based on artificial intelligence, such as machine learning and neural networks, aiming to optimize treatments and improve hospital workflows.
For patients, however, it is not always clear when and to what extent these systems are being used and relied upon. Nor is it always clear to patients whether the systems are used to further the best interests of the individual patient or simply to preserve scarce resources in the health care system.
Offhand, this lack of information seems to be in tension with the moral requirement of informed consent, which is usually considered a foundational pillar of medical ethics. Barring a few exceptional circumstances, it is widely agreed that patients should be allowed to make their own medical decisions, guided by relevant information as well as their own values, principles, and beliefs.
There are a number of reasons why informed consent is usually considered a moral requirement in the medical domain. For instance, this requirement arguably protects patients against harm, promotes autonomy, and increases trust. I think these reasons also provide grounds for disclosing information to patients when artificial intelligence/machine learning is used to guide serious medical decisions.
Firstly, requiring disclosure about using artificial intelligence to support or make serious medical decisions arguably protects patients from harm. This is because such a requirement presumably counteracts blind reliance on artificial intelligence technology, since treatment recommendations based on artificial intelligence will have to be explained to patients on an individual basis. This will presumably help keep health care workers on their toes.
Secondly, disclosing information about using artificial intelligence also promotes the autonomy of patients to the extent that patients in fact want to be informed. This may be the case for some patients and not for others. However, unless patients are actually asked about their preferences for information in this regard, there is no way to determine whether this is the case on an individual level. Considering that some patients do seem to have apprehensions when it comes to using artificial intelligence in the health care system, there are grounds for expecting that some patients would like to be informed in this regard.
Thirdly, disclosing information increases trust. Many people are skeptical of new technology in particular or of the health care system in general. A lack of transparency about the use of artificial intelligence may breed distrust in the health care system, which could have serious consequences. People need to be able to trust their doctors. Moreover, disclosing relevant information could hopefully also counteract misplaced distrust in reliable and trustworthy systems.
Disclosing to patients when artificial intelligence/machine learning is used in the doctor’s decision-making protects patients against harm, promotes autonomy, and increases trust.
Rune Kligenberg, Danish National Center for Ethics
Based on the above, there are seemingly good reasons to disclose to patients when artificial intelligence is used as part of their diagnosis, treatment, or outcome predictions in serious medical decisions.
Of course, artificial intelligence systems can be difficult to explain and understand: some because their inner workings are closed off to outside interpretation (so-called black box algorithms), and some because they reach their decisions through data sets too large to be immediately comprehended.
However, this complexity is no good reason not to inform patients. On the contrary, the complexity of these systems is one of the reasons why some of them carry certain risks in terms of discrimination and mismatching.
Beware of Mismatching
Mismatching is when relevant background information is not taken into account, and subjects are consequently matched to the wrong group. For instance, an algorithm may determine that patients with asthma are less likely to die of pneumonia than patients without asthma. This is not because asthma protects against pneumonia, but because patients with asthma usually receive a treatment that lowers their risk of developing severe pneumonia, and the algorithm cannot distinguish correlation from causation.
Such an algorithm could match an asthmatic patient who does not receive this particular treatment, and who therefore has a higher rather than a lower risk of developing severe pneumonia, to the wrong risk group.
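To see how this can happen, here is a minimal sketch in Python using entirely made-up, synthetic numbers. It is not the clinical algorithm discussed above; it only illustrates the underlying point: if the extra treatment asthma patients receive is missing from the data, the raw mortality rates make asthma look protective, even though an untreated asthma patient is in fact at the highest risk.

```python
# Minimal sketch with synthetic data (all probabilities are invented for
# illustration) of how a confounder -- extra treatment routinely given to
# asthma patients -- can make asthma look protective against severe pneumonia.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

asthma = rng.random(n) < 0.15  # 15% of patients have asthma
# Asthma patients almost always receive the extra treatment; others rarely do.
treated = (asthma & (rng.random(n) < 0.95)) | (~asthma & (rng.random(n) < 0.10))

# True underlying risk: asthma raises the risk, the treatment lowers it more.
risk = 0.05 + 0.04 * asthma - 0.06 * treated
dies = rng.random(n) < np.clip(risk, 0.0, 1.0)

# A naive model that only sees the asthma flag learns the correlation:
# asthma patients die less often in this data set.
print("mortality, asthma:          ", dies[asthma].mean())
print("mortality, no asthma:       ", dies[~asthma].mean())

# The mismatch: an asthmatic patient who does NOT receive the extra
# treatment actually has the highest risk of all.
print("mortality, untreated asthma:", dies[asthma & ~treated].mean())
```

Run as written, the observed mortality is lower for asthma patients than for non-asthma patients, while untreated asthma patients fare worst; a model trained without the treatment variable would assign them to the wrong risk group.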
So the next time you go to the hospital, and artificial intelligence systems are used as part of your treatment, make sure to ask any questions you may have. For instance, which steps have been taken to avoid potential mismatching in your case? And has the algorithm in question been trained on a population that represents you?