Carloni G., Tsaftaris S. A., Colantonio S.
Domain shift robustness, Out-of-distribution, Causality
Due to domain shift, deep learning image classifiers perform poorly when applied to a domain different from the training one. For instance, a classifier trained on chest X-ray (CXR) images from one hospital may not generalize to images from another hospital due to variations in scanner settings or patient characteristics. In this paper, we introduce our CROCODILE framework, showing how tools from causality can foster a model's robustness to domain shift via feature disentanglement, contrastive learning losses, and the injection of prior knowledge. This way, the model relies less on spurious correlations, better learns the mechanism mapping images to predictions, and outperforms baselines on out-of-distribution (OOD) data. We apply our method to multi-label lung disease classification from CXRs, utilizing over 750,000 images from four datasets. Our bias-mitigation method improves domain generalization and fairness, broadening the applicability and reliability of deep learning models for safer medical image analysis. Find our code at: https://github.com/gianlucarloni/crocodile.
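To make the contrastive disentanglement idea concrete, below is a minimal PyTorch sketch of one plausible such objective, not the released CROCODILE code (see the repository above for that): same-class "causal" embeddings are pulled together with a supervised InfoNCE-style term, while the causal and spurious embeddings of each image are pushed apart. All names here (contrastive_disentanglement_loss, causal_feats, spurious_feats, temperature) are hypothetical, and single-label integer targets are assumed for brevity even though the paper addresses a multi-label task.

import torch
import torch.nn.functional as F

def contrastive_disentanglement_loss(causal_feats, spurious_feats, labels,
                                     temperature=0.1):
    # Illustrative sketch only; not the authors' implementation.
    # L2-normalize both embedding branches.
    z_c = F.normalize(causal_feats, dim=1)    # (N, D) "causal" embeddings
    z_s = F.normalize(spurious_feats, dim=1)  # (N, D) "spurious" embeddings
    n = z_c.size(0)

    # Attraction: supervised InfoNCE over the causal branch, where
    # samples sharing a label count as positives.
    sim = z_c @ z_c.t() / temperature
    eye = torch.eye(n, dtype=torch.bool, device=z_c.device)
    pos_mask = ((labels[:, None] == labels[None, :]) & ~eye).float()
    logits = sim - sim.max(dim=1, keepdim=True).values.detach()  # numerical stability
    exp_logits = logits.exp().masked_fill(eye, 0.0)              # exclude self-similarity
    log_prob = logits - (exp_logits.sum(dim=1, keepdim=True) + 1e-8).log()
    pos_count = pos_mask.sum(dim=1).clamp(min=1.0)
    attract = -(log_prob * pos_mask).sum(dim=1) / pos_count

    # Repulsion: causal and spurious embeddings of the same image should
    # encode different information, so penalize their cosine similarity.
    repel = (z_c * z_s).sum(dim=1).pow(2)

    return attract.mean() + repel.mean()

# Toy usage with random tensors.
N, D = 8, 128
loss = contrastive_disentanglement_loss(
    torch.randn(N, D, requires_grad=True),
    torch.randn(N, D, requires_grad=True),
    torch.randint(0, 3, (N,)))
loss.backward()

The repulsion term is one simple way to encourage the two branches to carry different information; alternatives such as adversarial independence objectives or cross-correlation penalties would serve the same disentanglement role.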
Source: Lecture Notes in Computer Science, vol. 15167, pp. 105-116. Marrakech, Morocco, 6-10/10/2024
@inproceedings{oai:iris.cnr.it:20.500.14243/498824,
  title     = {CROCODILE: Causality aids RObustness via COntrastive DIsentangled LEarning},
  author    = {Carloni, G. and Tsaftaris, S. A. and Colantonio, S.},
  booktitle = {Lecture Notes in Computer Science},
  volume    = {15167},
  pages     = {105--116},
  address   = {Marrakech, Morocco},
  year      = {2024}
}