
Explainability in breast cancer detection

Ijaz Ahmad; Alessia Amelio; Daniela Cardone; Eliezer Zahid Gill; Francesca Scozzari
2025-01-01

Abstract

Breast cancer is one of the most prevalent and lethal diseases among women worldwide, requiring timely and accurate diagnosis to improve patient outcomes. Recent studies have explored risk factors associated with breast cancer. In premenopausal women and those with BRCA genetic susceptibility, air pollution increases breast cancer risk, as environmental toxins are more likely to cause harm in these vulnerable groups, particularly those living in densely populated urban areas with high pollution levels or near construction sites. In recent decades, deep learning has emerged as a general-purpose tool for computer-assisted diagnosis, enabling classification and segmentation tasks in medical imaging. These models are particularly effective at detecting subtle patterns in imaging data that are imperceptible to the human eye, substantially enhancing diagnostic efficiency. This article focuses on breast cancer classification from ultrasound images. Our results identify ResNet50 as the best-performing model, achieving a remarkable accuracy of 98.72%. We further interpret the model's output using the explainable AI (XAI) tool Grad-CAM, examining its capability to provide interpretable explanations. The XAI method provides clinically relevant and interpretable explanations, as supported by analysis of both the original images and their corresponding segmented masks.
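To make the classification-plus-explanation pipeline concrete, below is a minimal Grad-CAM sketch for a ResNet50 classifier in Python (PyTorch/torchvision). The three-class head, target layer, preprocessing, and the grad_cam helper are illustrative assumptions, not the paper's exact configuration.

import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import models, transforms

# ResNet50 backbone with a three-class head (normal / benign / malignant);
# the class count is an assumption made for illustration.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = torch.nn.Linear(model.fc.in_features, 3)
model.eval()

# Grad-CAM needs the activations of a convolutional layer and the gradient
# of the target class score with respect to them; hook the last block.
activations, gradients = {}, {}
target_layer = model.layer4[-1]
target_layer.register_forward_hook(
    lambda m, i, o: activations.update(value=o.detach()))
target_layer.register_full_backward_hook(
    lambda m, gi, go: gradients.update(value=go[0].detach()))

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def grad_cam(image_path):
    """Return a [0, 1] heatmap of the regions driving the predicted class."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    scores = model(x)
    cls = scores.argmax(dim=1).item()
    model.zero_grad()
    scores[0, cls].backward()  # gradient of the winning class score

    # Weight each feature map by its spatially averaged gradient, combine,
    # and keep only positive evidence (ReLU), as in the Grad-CAM method.
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear",
                        align_corners=False)
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

In a workflow like the one the abstract describes, such heatmaps would be overlaid on the original ultrasound images and compared against the segmented lesion masks to check whether the model attends to clinically relevant regions.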
2025
Thematic Workshops at Ital-IA 2025
Luca Manzoni, Luca Bortolussi, Giulia Cisotto, Fabio Anselmi
English
no
Ital-IA-TW 2025 Thematic Workshops at Ital-IA 2025
23-24/06/2025
Trieste, Italy
International
ELECTRONIC
CEUR WORKSHOP PROCEEDINGS
4121
1
7
7
CEUR-WS
Breast Cancer, Air Pollution, Ultrasound Images, Explainable AI, Grad-CAM
https://ceur-ws.org/Vol-4121/Ital-IA_2025_paper_115.pdf
no
none
Ahmad, Ijaz; Amelio, Alessia; Cardone, Daniela; Gill, Eliezer Zahid; Scozzari, Francesca
273
info:eu-repo/semantics/conferenceObject
5
4 Contribution in Conference Proceedings (Proceeding)::4.1 Contribution in conference proceedings
   Smart Knowledge: Enhancing Argumentation and Abstraction for Explanation and Analysis
   SMARTK
   Università degli Studi della CALABRIA
Files in this record:
No files are associated with this record.


Use this identifier to cite or link to this document: https://hdl.handle.net/11564/868853
Warning! The displayed data have not been validated by the university.
