
A Comparative Analysis of Artificial Intelligence Methods for Breast Cancer Interpretation

Amelio, Alessia; Merla, Arcangelo; Scozzari, Francesca
2025-01-01

Abstract

Breast cancer remains a major threat to women's health worldwide, underscoring the need for better strategies to classify cases by severity. Computer-aided diagnosis (CADx) powered by explainable artificial intelligence (XAI) offers a promising solution by minimizing diagnostic errors and fostering trust through a more transparent decision-making process. As XAI evolves, it plays a crucial role in increasing the interpretability of AI-driven diagnostics, particularly in distributed healthcare systems. By explaining a model's predictions, XAI makes clinicians more confident in accepting its clinical outcomes. Accordingly, this study provides a comparative analysis of multiple deep learning models for breast cancer identification on a publicly available dataset of 780 ultrasound images with their masks. The classification results are then explained using the Grad-CAM method, which improves the interpretability and traceability of the models. This approach lets the models justify their decisions visually through heatmaps that highlight which parts of an image contribute to the prediction, a particularly valuable property when studying medical images. The obtained results demonstrate XAI's transformative potential in medical imaging, paving the way for more reliable, scalable, and efficient diagnostic tools. By critically comparing different types of deep learning models for breast cancer identification, the study also highlights the advantages and limitations of each architecture for the task.
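The abstract describes Grad-CAM heatmaps that highlight which image regions drive a prediction. As a minimal illustration of the core computation (not the authors' implementation, and with random arrays standing in for a real CNN's activations and gradients), Grad-CAM averages the gradients of the class score over each feature map's spatial dimensions to get per-channel weights, forms a weighted sum of the feature maps, and applies a ReLU:

```python
import numpy as np

def grad_cam(feature_maps: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Compute a Grad-CAM heatmap.

    feature_maps, gradients: arrays of shape (channels, height, width),
    the activations of a convolutional layer and the gradients of the
    target class score with respect to them.
    Returns a (height, width) heatmap scaled to [0, 1].
    """
    # Per-channel weights: spatial average of the gradients (Grad-CAM's alpha_k)
    weights = gradients.mean(axis=(1, 2))  # shape (channels,)
    # Weighted combination of the feature maps, then ReLU
    cam = np.maximum((weights[:, None, None] * feature_maps).sum(axis=0), 0.0)
    # Scale to [0, 1] so the map can be overlaid on the image as a heatmap
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Toy example: random tensors stand in for a real network's layer outputs
rng = np.random.default_rng(0)
maps = rng.standard_normal((8, 7, 7))
grads = rng.standard_normal((8, 7, 7))
heatmap = grad_cam(maps, grads)
```

In practice the heatmap is upsampled to the input resolution and overlaid on the ultrasound image; the feature maps and gradients would come from the last convolutional layer of the classifier under study.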
ISBN: 979-8-3315-1043-5
Files for this product:
No files are associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11564/867653
Warning: the displayed data have not been validated by the university.
