Pachetti E., Tsaftaris S. A., Colantonio S.
Few-shot learning · Self-supervised learning · Disentangled representation learning
Background and objective: Employing deep learning models in critical domains such as medical imaging poses challenges associated with the limited availability of training data. We present a strategy for improving the performance and generalization capabilities of models trained in low-data regimes.
Methods: The proposed method starts with a pre-training phase in which features learned in a self-supervised setting are disentangled to improve the robustness of the representations for downstream tasks. We then introduce a meta-fine-tuning step that leverages related classes between the meta-training and meta-testing phases while varying the granularity level. This approach aims to enhance the model's generalization capabilities by exposing it to more challenging classification tasks during meta-training and evaluating it on easier tasks of greater clinical relevance during meta-testing. We demonstrate the effectiveness of the proposed approach through a series of experiments exploring several backbones, as well as diverse pre-training and fine-tuning schemes, on two distinct medical tasks: classification of prostate cancer aggressiveness from MRI data and classification of breast cancer malignancy from microscopic images.
Results: Our results indicate that the proposed approach consistently outperforms the ablation baselines and remains competitive even under a distribution shift between training and evaluation data.
Conclusion: Extensive experiments demonstrate the effectiveness and wide applicability of the proposed approach. We hope this work adds another solution to the arsenal of methods for addressing learning issues in data-scarce imaging domains.
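The abstract's meta-fine-tuning idea (meta-train on harder, fine-grained episodes; meta-test on easier, coarser but clinically relevant ones) can be illustrated with an episode sampler. This is a minimal sketch, not the authors' implementation: the label hierarchy `FINE_TO_COARSE` (ISUP grades mapped to low/high risk) and the episode sizes are illustrative assumptions.

```python
import random

# Hypothetical label hierarchy: fine-grained classes map to coarser,
# clinically relevant super-classes (illustrative, not from the paper).
FINE_TO_COARSE = {
    "isup_1": "low_risk", "isup_2": "low_risk",
    "isup_3": "high_risk", "isup_4": "high_risk", "isup_5": "high_risk",
}

def sample_episode(dataset, classes, n_way, k_shot, q_queries, rng):
    """Sample one N-way K-shot episode over the given label set.

    `dataset` maps each class name to a list of examples; returns
    (support, query) lists of (example, label) pairs.
    """
    way = rng.sample(classes, n_way)
    support, query = [], []
    for label in way:
        items = rng.sample(dataset[label], k_shot + q_queries)
        support += [(x, label) for x in items[:k_shot]]
        query += [(x, label) for x in items[k_shot:]]
    return support, query

rng = random.Random(0)
# Toy data: 12 dummy examples per fine-grained class.
data_fine = {c: [f"{c}_{i}" for i in range(12)] for c in FINE_TO_COARSE}
# The same data viewed at coarse granularity for meta-testing.
data_coarse = {}
for fine, coarse in FINE_TO_COARSE.items():
    data_coarse.setdefault(coarse, []).extend(data_fine[fine])

# Meta-training: harder, fine-grained 3-way 2-shot episodes.
train_support, train_query = sample_episode(
    data_fine, list(data_fine), n_way=3, k_shot=2, q_queries=3, rng=rng)
# Meta-testing: easier, clinically relevant 2-way 2-shot episodes.
test_support, test_query = sample_episode(
    data_coarse, list(data_coarse), n_way=2, k_shot=2, q_queries=3, rng=rng)
```

Varying only the label granularity between the two phases keeps the underlying image distribution related while making meta-training episodes strictly harder, which is the generalization mechanism the abstract describes.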
Source: COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE UPDATE
@article{oai:iris.cnr.it:20.500.14243/498821,
  title  = {Boosting few-shot learning with disentangled self-supervised learning and meta-learning for medical image classification},
  author = {Pachetti E. and Tsaftaris S. A. and Colantonio S.},
  year   = {2024}
}