Rizzo M., Veneri A., Albarelli A., Lucchese C., Nobile M., Conati C.
Keywords: Explainability, Machine learning, Biomedicine
EXplainable Artificial Intelligence (XAI) is a vibrant research topic in the artificial intelligence community. It is attracting growing interest across methods and domains, especially those involving high-stakes decision-making, such as the biomedical sector. Much has been written about the subject, yet XAI still lacks shared terminology and a framework capable of providing structural soundness to explanations. In our work, we address these issues by proposing a novel definition of explanation that synthesizes what can be found in the literature. We recognize that explanations are not atomic but rather the combination of evidence stemming from the model and its input-output mapping, and the human interpretation of this evidence. Furthermore, we characterize explanations in terms of the properties of faithfulness (i.e., how accurately the explanation describes the model's inner workings and decision-making process) and plausibility (i.e., how convincing the explanation seems to the user). Our theoretical framework simplifies how these properties are operationalized, and it provides new insights into common explanation methods, which we analyze as case studies. We also discuss the impact that our framework could have in biomedicine, a highly sensitive application domain where XAI can play a central role in generating trust.
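To make the abstract's decomposition concrete, the sketch below models an explanation as the pairing of model-derived evidence with a human-facing interpretation, and probes faithfulness with a simple perturbation check. All names (Evidence, Explanation, faithfulness_gap) are hypothetical illustrations under our own assumptions, not the paper's formalism; the faithfulness probe is a common proxy, not the authors' definition.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class Evidence:
    """Model-derived information, e.g. per-feature attribution scores."""
    feature_names: Sequence[str]
    scores: Sequence[float]

@dataclass
class Explanation:
    """An explanation = evidence plus a human interpretation of it."""
    evidence: Evidence
    interpretation: str  # human-readable reading of the evidence

def interpret_top_feature(ev: Evidence) -> Explanation:
    """A trivial interpretation: report the most influential feature."""
    top = max(range(len(ev.scores)), key=lambda i: abs(ev.scores[i]))
    text = f"The prediction is driven mainly by '{ev.feature_names[top]}'."
    return Explanation(evidence=ev, interpretation=text)

def faithfulness_gap(predict: Callable[[list], float], x: list,
                     important: int, unimportant: int,
                     baseline: float = 0.0) -> float:
    """One common faithfulness proxy (illustrative only): ablating the
    allegedly important feature should perturb the model output more
    than ablating an allegedly unimportant one."""
    y = predict(x)
    x_imp = list(x); x_imp[important] = baseline
    x_unimp = list(x); x_unimp[unimportant] = baseline
    return abs(y - predict(x_imp)) - abs(y - predict(x_unimp))
```

A positive faithfulness_gap suggests the evidence tracks what the model actually relies on; plausibility, by contrast, concerns how convincing the interpretation string is to the user and cannot be checked against the model alone.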
Source: CIBCB 2023 - IEEE Conference on Computational Intelligence in Bioinformatics and Computational Biology, Eindhoven, The Netherlands, 29-31/08/2023
@inproceedings{oai:it.cnr:prodotti:488083,
  title     = {A theoretical framework for AI models explainability with application in biomedicine},
  author    = {Rizzo M. and Veneri A. and Albarelli A. and Lucchese C. and Nobile M. and Conati C.},
  doi       = {10.1109/cibcb56990.2023.10264877},
  booktitle = {CIBCB 2023 - IEEE Conference on Computational Intelligence in Bioinformatics and Computational Biology, Eindhoven, The Netherlands, 29-31/08/2023},
  year      = {2023}
}