Corbucci L., Guidotti R., Monreale A.
Explainable AI · Federated Learning · Feature Importance
Federated Learning has gained increasing popularity in recent years for its ability to train Machine Learning models in critical contexts using private data, without moving the data. Most of the work in the literature proposes algorithms and architectures for training neural networks which, although they achieve high performance on various prediction tasks and are easy to learn with a cooperative mechanism, have obscure predictive reasoning. Therefore, in this paper, we propose a variant of SHAP, one of the most widely used explanation methods, tailored to horizontal server-based Federated Learning. The basic idea is to explain a prediction made by the trained Machine Learning model for a given instance as an aggregation of the explanations provided by the clients participating in the cooperation. We empirically test our proposal on two tabular datasets and observe interesting and encouraging preliminary results.
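The core idea of the abstract — combining the clients' SHAP explanations into a single explanation of the global model's prediction — can be sketched as a simple aggregation of per-client feature-attribution vectors. The sketch below is illustrative only: the function name, the use of dataset-size weighting, and a plain weighted average are assumptions for demonstration, not the paper's exact aggregation rule.

```python
import numpy as np

def aggregate_client_explanations(client_shap, client_sizes):
    """Combine per-client SHAP values for one instance into a single
    explanation, weighting each client by its local dataset size.
    Illustrative sketch; the paper's aggregation scheme may differ."""
    shap_values = np.asarray(client_shap, dtype=float)  # (n_clients, n_features)
    weights = np.asarray(client_sizes, dtype=float)
    weights = weights / weights.sum()                   # normalise to sum to 1
    return weights @ shap_values                        # per-feature weighted mean

# Hypothetical example: three clients explain the same instance (4 features)
client_shap = [
    [0.10, -0.20, 0.05, 0.00],
    [0.20, -0.10, 0.00, 0.05],
    [0.00, -0.30, 0.10, 0.00],
]
client_sizes = [100, 100, 200]  # local training-set sizes
aggregated = aggregate_client_explanations(client_shap, client_sizes)
```

Here the client holding more data contributes proportionally more to the final attribution of each feature, mirroring the weighting commonly used when averaging model updates in horizontal Federated Learning.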
Source: xAI 2023 - World Conference on Explainable Artificial Intelligence, pp. 151–163, Lisbon, Portugal, 26-28/07/2023
@inproceedings{oai:it.cnr:prodotti:490387, title = {Explaining black-boxes in federated learning}, author = {Corbucci L. and Guidotti R. and Monreale A.}, doi = {10.1007/978-3-031-44067-0_8}, booktitle = {xAI 2023 - World Conference on Explainable Artificial Intelligence, pp. 151–163, Lisbon, Portugal, 26-28/07/2023}, year = {2023} }
TAILOR
Foundations of Trustworthy AI - Integrating Reasoning, Learning and Optimization
XAI
Science and technology for the explanation of AI decision making
SoBigData-PlusPlus
SoBigData++: European Integrated Infrastructure for Social Mining and Big Data Analytics
Humane AI
Toward AI Systems That Augment and Empower Humans by Understanding Us, our Society and the World Around Us