
Argumentation approaches for explainable AI in medical informatics

Caroprese L.;
2022-01-01

Abstract

Artificial Intelligence algorithms are powerful in performing accurate predictions, but they are often considered black boxes, as they provide no explanation of how outputs are derived from inputs or why a decision is taken. There is therefore an urgent need for completely transparent and eXplainable Artificial Intelligence (XAI), as also recognized by the explicit inclusion of the right to explanation in the General Data Protection Regulation (GDPR). There has been much study on diagnosis, decision support, and interpretability, and there is significant interest in the development of Explainable AI in the realm of medicine. Interpretability in the medical field is not just an intellectual curiosity, but a key factor: medical choices impact the lives of patients and involve risk and responsibility for clinicians. This proposal investigates the benefit of using logic approaches for eXplainable AI by showing how their natural characteristics of explainability and expressiveness help in the design of ethical, explainable and justified intelligent systems. More specifically, the paper focuses on a specific topic, the use of argumentation theory in Medical Informatics, by overviewing existing approaches in the literature. The overview categorizes approaches on the basis of the specific purpose the argumentation is used for, into the following categories: Argumentation for Medical Decision Making, Argumentation for Medical Explanations and Argumentation for Medical Dialogues.
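As a toy illustration of the kind of formalism the surveyed approaches build on (a Dung-style abstract argumentation framework; this sketch is not taken from the paper, and all names in it are hypothetical), the snippet below computes the grounded extension — the most skeptical set of jointly acceptable arguments — by iterating the characteristic function until a fixpoint is reached:

```python
# Illustrative sketch of Dung-style abstract argumentation (grounded semantics).
# Arguments are plain strings; attacks is a set of (attacker, attacked) pairs.

def grounded_extension(arguments, attacks):
    """Iterate F(S) = {a | every attacker of a is attacked by S} to a fixpoint."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    extension = set()
    while True:
        # An argument is acceptable w.r.t. the current set if each of its
        # attackers is itself attacked by an already-accepted argument.
        new = {a for a in arguments
               if all(any((s, b) in attacks for s in extension)
                      for b in attackers[a])}
        if new == extension:
            return extension
        extension = new

# Example: "a" attacks "b", "b" attacks "c". Argument "a" is unattacked,
# so it defeats "b", which in turn reinstates "c".
args = {"a", "b", "c"}
atts = {("a", "b"), ("b", "c")}
print(sorted(grounded_extension(args, atts)))  # ['a', 'c']
```

In a medical-decision-making reading, arguments could stand for treatment recommendations and attacks for contraindications; the grounded extension then identifies the recommendations that survive all conflicts, together with an inspectable justification — the explainability property the paper emphasizes.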
Files in this item:

File: ISWA 2022.pdf
Type: Publisher's PDF
Format: Adobe PDF
Size: 892.87 kB
Access: Archive managers only (copy available on request)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11564/794913
Citations
  • Scopus: 26