On the use of contrastive learning for standard-plane classification in fetal ultrasound imaging

Moccia, Sara
2024-01-01

Abstract

Background: To investigate the effectiveness of contrastive learning, in particular SimCLR, in reducing the need for large annotated ultrasound (US) image datasets for fetal standard-plane identification. Methods: We explore the advantage of SimCLR under both low and high inter-class variability, while also considering how classification performance varies with the amount of labels used. This evaluation is performed by exploiting contrastive learning through different training strategies. We apply both quantitative and qualitative analyses, using standard metrics (F1-score, sensitivity, and precision), Class Activation Mapping (CAM), and t-Distributed Stochastic Neighbor Embedding (t-SNE). Results: For classification tasks with high inter-class variability, contrastive learning does not bring a significant advantage, whereas it proves relevant for low inter-class variability, specifically when the backbone is initialized with ImageNet weights. Conclusions: Contrastive learning approaches are typically used when a large amount of unlabeled data is available, which is not representative of US datasets. We show that SimCLR, used either as pre-training with a backbone initialized via ImageNet weights or in an end-to-end dual-task setting, can improve performance over standard transfer-learning approaches when the dataset is small and characterized by low inter-class variability.
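
The abstract refers to two uses of SimCLR: as a pre-training stage and as one head of an end-to-end dual task. At the core of both is the NT-Xent contrastive loss, which pulls together the projections of two augmented views of the same image and pushes apart all other pairs in the batch. The sketch below is a minimal PyTorch illustration of that loss; the function name nt_xent_loss, the shapes, and the default temperature of 0.5 are assumptions for illustration, not the authors' implementation.

    import torch
    import torch.nn.functional as F

    def nt_xent_loss(z1, z2, temperature=0.5):
        # z1, z2: (N, d) projection-head outputs for two augmented views
        # of the same N images (shapes and names are assumptions).
        n = z1.size(0)
        z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, d), unit norm
        sim = z @ z.t() / temperature                       # scaled cosine similarity
        sim.fill_diagonal_(float("-inf"))                   # exclude self-pairs
        # The positive for row i is the other view of the same image: i+N or i-N.
        targets = torch.cat([torch.arange(n) + n, torch.arange(n)]).to(z.device)
        return F.cross_entropy(sim, targets)

    # Toy usage: random 128-d projections for a batch of 8 images.
    loss = nt_xent_loss(torch.randn(8, 128), torch.randn(8, 128))
    print(loss.item())

In the dual-task strategy mentioned in the conclusions, a loss of this form would plausibly be summed with the standard classification cross-entropy, e.g. total loss = classification loss + λ · contrastive loss; the weighting λ is illustrative, not a value reported in the paper.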

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11564/829004

Citations
  • PubMed Central: 0
  • Scopus: 0
  • Web of Science: ND