Automatic delineation of laryngeal squamous cell carcinoma during endoscopy

Moccia, Sara;
2024-01-01

Abstract

White Light (WL) and Narrow Band Imaging (NBI) endoscopy are widely used to assess the superficial spread of laryngeal squamous cell carcinoma (LSCC). However, analyzing these images requires a high level of attention and extensive clinical expertise, leading to inter-clinician variability in the assessment of tumor margins. Computer-aided segmentation can automate the identification of LSCC margins, supporting clinicians in this challenging task. In this paper, we present SegMENT-Plus, a deep learning convolutional segmentation network specifically developed and optimized for accurate delineation of LSCC. SegMENT-Plus uses EfficientNetB5 as encoder with a new modified Atrous Spatial Pyramid Pooling (m-ASPP) block that integrates the Convolutional Block Attention Module (CBAM) and Squeeze-and-Excitation (SE). In this new architecture, CBAM extracts local and global LSCC features from the encoder, while the SE block refines cancer segmentation on the output of each dilated convolution. SegMENT-Plus was trained and evaluated on a multi-center dataset including clinical data from three different hospitals. A total of 4289 annotated laryngeal images from 766 patients were included in this study. The experiments showed that SegMENT-Plus achieved a Dice Similarity Coefficient (DSC) between 81.4% and 84.9% and an Intersection over Union (IoU) between 81.8% and 85.7% on the data from the different hospitals, attesting to its high performance and generalization capability. The proposed segmentation architecture also demonstrated statistically significant improvements in DSC and IoU over other state-of-the-art architectures, showing that this work is a concrete foundation towards a clinical system for the automatic delineation of LSCC margins in endoscopic images.
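To make the architectural idea concrete, the sketch below illustrates an ASPP-style block in which each dilated-convolution branch is refined by a Squeeze-and-Excitation module, as the abstract describes. This is a minimal illustration, not the paper's implementation: all module names, channel sizes, dilation rates, and the reduction ratio are assumptions, and the CBAM stage applied to the encoder features is omitted for brevity.

```python
# Minimal, illustrative PyTorch sketch of an ASPP block with per-branch
# Squeeze-and-Excitation (SE) refinement. Hyperparameters are assumed,
# not taken from the SegMENT-Plus paper.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: global pooling + gating to recalibrate channels."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # channel-wise reweighting

class MiniASPP(nn.Module):
    """Parallel dilated convolutions; each branch output is refined by SE."""
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList()
        for r in rates:
            self.branches.append(nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
                SEBlock(out_ch),  # SE refinement on each dilated-conv output
            ))
        # 1x1 projection after concatenating all branches
        self.project = nn.Sequential(
            nn.Conv2d(out_ch * len(rates), out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

# Quick shape check on a dummy encoder feature map.
feats = torch.randn(1, 512, 32, 32)
out = MiniASPP(512, 256)(feats)
print(out.shape)  # torch.Size([1, 256, 32, 32])
```

Applying attention per branch, rather than once after fusion, lets each dilation rate reweight its own receptive-field-specific features before they are merged.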
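For reference, the two overlap metrics reported above have standard definitions. With A the predicted tumor mask and B the manual annotation:

```latex
\mathrm{DSC} = \frac{2\,|A \cap B|}{|A| + |B|},
\qquad
\mathrm{IoU} = \frac{|A \cap B|}{|A \cup B|}
```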
Files in this record:
File: Biomed Sign Process Control 2024 Azam_compressed.pdf
Type: publisher's PDF (editorial version)
Format: Adobe PDF
Size: 967.47 kB
Access: archive administrators only (copy available on request)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11564/828999
Citations
  • PMC: not available
  • Scopus: 0
  • Web of Science: 0