Metta C., Beretta A., Pellungrini R., Rinzivillo S., Giannotti F.
Keywords: explainable artificial intelligence; machine learning; artificial intelligence; review. Subjects: Biology (General), QH301-705.5; Technology, T.
This paper focuses on the use of local Explainable Artificial Intelligence (XAI) methods, particularly the Local Rule-Based Explanations (LORE) technique, in healthcare and medical settings. It emphasizes the critical role of interpretability and transparency in AI systems for diagnosing diseases, predicting patient outcomes, and creating personalized treatment plans. While acknowledging the complexities and inherent trade-offs between interpretability and model performance, our work underscores the significance of local XAI methods in enhancing decision-making processes in healthcare. By providing granular, case-specific insights, local XAI methods such as LORE improve physicians' and patients' understanding of machine learning models and their outcomes. Our paper reviews significant contributions to local XAI in healthcare, highlighting its potential to improve clinical decision-making, ensure fairness, and comply with regulatory standards.
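The abstract's central idea, that a local method yields a case-specific, rule-based explanation for a single black-box prediction, can be illustrated with a minimal Python sketch. The sketch below is not the authors' LORE implementation (LORE generates the local neighborhood with a genetic algorithm and also derives counterfactual rules); the synthetic dataset, Gaussian perturbation scheme, and shallow surrogate decision tree are all illustrative assumptions.

```python
# Minimal sketch of a local rule-based explanation in the spirit of LORE:
# query a black-box model on a synthetic neighborhood around one record,
# fit an interpretable surrogate tree on those queries, and read off the
# decision path covering the record as an if-then rule.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Black-box model standing in for a clinical risk predictor (illustrative data).
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Instance to explain and a perturbation-based neighborhood around it
# (LORE itself builds this neighborhood with a genetic algorithm).
x = X[0]
neighborhood = x + rng.normal(scale=0.5, size=(1000, X.shape[1]))
neighborhood_labels = black_box.predict(neighborhood)

# Interpretable local surrogate trained to mimic the black box locally.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(neighborhood, neighborhood_labels)

# Walk the surrogate's decision path for x and collect its premises.
tree = surrogate.tree_
node = 0
premises = []
while tree.children_left[node] != -1:  # stop at a leaf
    feat, thr = tree.feature[node], tree.threshold[node]
    if x[feat] <= thr:
        premises.append(f"feature_{feat} <= {thr:.2f}")
        node = tree.children_left[node]
    else:
        premises.append(f"feature_{feat} > {thr:.2f}")
        node = tree.children_right[node]

prediction = surrogate.classes_[tree.value[node].argmax()]
print("IF " + (" AND ".join(premises) or "TRUE") + f" THEN class = {prediction}")
```

The printed rule is the kind of case-specific artifact the paper argues physicians and patients can inspect directly, in contrast to the opaque prediction of the underlying model.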
Source: BIOENGINEERING, vol. 11 (issue 4)
@article{oai:iris.cnr.it:20.500.14243/513830,
  title   = {Towards transparent healthcare: advancing local explanation methods in Explainable Artificial Intelligence},
  author  = {Metta, C. and Beretta, A. and Pellungrini, R. and Rinzivillo, S. and Giannotti, F.},
  journal = {Bioengineering},
  volume  = {11},
  number  = {4},
  year    = {2024},
  doi     = {10.3390/bioengineering11040369}
}
Projects:
CREXDATA: Critical Action Planning over Extreme-Scale Data
TAILOR: Foundations of Trustworthy AI - Integrating Reasoning, Learning and Optimization
HumanE-AI-Net: HumanE AI Network
XAI: Science and technology for the explanation of AI decision making
SoBigData-PlusPlus: SoBigData++: European Integrated Infrastructure for Social Mining and Big Data Analytics