2023
Journal article  Open Access

Improving trust and confidence in medical skin lesion diagnosis through explainable deep learning

Metta C., Beretta A., Guidotti R., Yin Y., Gallinari P., Rinzivillo S., Giannotti F.

Keywords: Explainable artificial intelligence · Dermoscopic images · Skin image analysis · Adversarial autoencoders · Computational Theory and Mathematics · Modeling and Simulation · Computer Science Applications · Information Systems · Applied Mathematics

A key issue in critical contexts such as medical diagnosis is the interpretability of the deep learning models adopted in decision-making systems. Research in eXplainable Artificial Intelligence (XAI) is trying to solve this issue. However, XAI approaches are often tested only on generalist classifiers and do not address realistic problems such as medical diagnosis. In this paper, we aim to improve users' trust and confidence in automatic AI decision systems in the field of medical skin lesion diagnosis by customizing an existing XAI approach to explain an AI model able to recognize different types of skin lesions. The explanation is generated through synthetic exemplar and counter-exemplar images of skin lesions, and our contribution offers the practitioner a way to highlight the crucial traits responsible for the classification decision. A validation survey with domain experts, beginners, and unskilled people shows that the use of explanations improves trust and confidence in the automatic decision system. Moreover, an analysis of the latent space adopted by the explainer reveals that some of the most frequent skin lesion classes are distinctly separated. This phenomenon may stem from the intrinsic characteristics of each class and may help resolve common misclassifications made by human experts.

Source: International Journal of Data Science and Analytics (Print) (2023). doi:10.1007/s41060-023-00401-z

Publisher: Springer


BibTeX entry
@article{oai:it.cnr:prodotti:486703,
	title = {Improving trust and confidence in medical skin lesion diagnosis through explainable deep learning},
	author = {Metta, C. and Beretta, A. and Guidotti, R. and Yin, Y. and Gallinari, P. and Rinzivillo, S. and Giannotti, F.},
	publisher = {Springer},
	doi = {10.1007/s41060-023-00401-z},
	journal = {International Journal of Data Science and Analytics (Print)},
	year = {2023}
}

TAILOR
Foundations of Trustworthy AI - Integrating Reasoning, Learning and Optimization

HumanE-AI-Net
HumanE AI Network

XAI
Science and technology for the explanation of AI decision making

SoBigData-PlusPlus
SoBigData++: European Integrated Infrastructure for Social Mining and Big Data Analytics
