Corbucci L., Monreale A., Panigutti C., Natilli M., Smiraglio S., Pedreschi D.
Keywords: Healthcare, AI models, Clinician trust
Explaining AI-based clinical decision support systems is crucial to enhancing clinician trust in these powerful systems. Unfortunately, the explanations currently produced by eXplainable Artificial Intelligence techniques are not easily understandable by experts outside of AI. Consequently, enriching explanations with relevant clinical information about a patient's health status is fundamental to increasing human experts' ability to assess the reliability of AI decisions. In this paper, we therefore propose a methodology that enables clinical reasoning by semantically enriching AI explanations. Starting from a medical AI explanation based only on the input features provided to the algorithm, our methodology leverages medical ontologies and NLP embedding techniques to link relevant information in the patient's clinical notes to the original explanation. Our experiments, involving a human expert, show promising performance in correctly identifying relevant information about patients' diseases.
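To make the linking step concrete, the sketch below illustrates one way such embedding-based matching could work: explanation features and clinical-note sentences are embedded with a sentence encoder, and each feature is paired with its most similar note sentence. The encoder model, sample data, and similarity threshold are illustrative assumptions, not the paper's actual pipeline (which also draws on medical ontologies).

from sentence_transformers import SentenceTransformer, util

# Assumed general-purpose encoder; the paper's actual embedding model may differ.
model = SentenceTransformer("all-MiniLM-L6-v2")

# Features surfaced by the original XAI explanation (illustrative values).
features = ["creatinine", "hemoglobin", "blood urea nitrogen"]

# Sentences extracted from the patient's clinical notes (illustrative values).
notes = [
    "Patient reports fatigue and shortness of breath.",
    "Creatinine elevated at 2.1 mg/dL, consistent with renal impairment.",
    "Hemoglobin 9.8 g/dL; mild anemia noted.",
]

# Embed both sides and compute pairwise cosine similarity.
feat_emb = model.encode(features, convert_to_tensor=True)
note_emb = model.encode(notes, convert_to_tensor=True)
scores = util.cos_sim(feat_emb, note_emb)  # shape: (len(features), len(notes))

# Attach the best-matching note sentence to each feature; threshold is assumed.
THRESHOLD = 0.4
for i, feature in enumerate(features):
    j = int(scores[i].argmax())
    sim = float(scores[i][j])
    if sim >= THRESHOLD:
        print(f"{feature!r} -> {notes[j]!r} (similarity {sim:.2f})")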
Source: DS 2023: 26th International Conference on Discovery Science, pp. 216–229, Porto, Portugal, 9–11 October 2023
@inproceedings{oai:it.cnr:prodotti:490051,
  title     = {Semantic enrichment of explanations of AI models for healthcare},
  author    = {Corbucci L. and Monreale A. and Panigutti C. and Natilli M. and Smiraglio S. and Pedreschi D.},
  doi       = {10.1007/978-3-031-45275-8_15},
  booktitle = {DS 2023: 26th International Conference on Discovery Science, pp. 216–229, Porto, Portugal, 09-11/10/2023},
  year      = {2023}
}
Projects:
TAILOR: Foundations of Trustworthy AI - Integrating Reasoning, Learning and Optimization
HumanE-AI-Net: HumanE AI Network
XAI: Science and technology for the explanation of AI decision making
SoBigData-PlusPlus: SoBigData++: European Integrated Infrastructure for Social Mining and Big Data Analytics