Guidotti R., Monreale A., Giannotti F., Pedreschi D., Ruggieri S., Turini F.
Keywords: Machine learning algorithms; Settore INF/01 - Informatica (Computer Science); Counterfactuals; Decision making; Artificial Intelligence; Interpretable Machine Learning; Explanation Rules; Computer Networks and Communications; Genetic algorithms; Counterfactual Explanation Rule; Explainable AI; Prediction algorithms; Data models; Decision trees; Open the Black Box; Intelligent systems
Abstract: The rise of sophisticated machine learning models has brought accurate but obscure decision systems, which hide their logic and thus undermine transparency, trust, and the adoption of artificial intelligence (AI) in socially sensitive and safety-critical contexts. We introduce a local rule-based explanation method that provides faithful explanations of the decision made by a black box classifier on a specific instance. The proposed method first learns an interpretable, local classifier on a synthetic neighborhood of the instance under investigation, generated by a genetic algorithm. It then derives from the interpretable classifier an explanation consisting of a decision rule, which explains the factual reasons for the decision, and a set of counterfactuals, which suggest the changes in the instance's features that would lead to a different outcome. Experimental results show that the proposed method outperforms existing approaches in the quality of its explanations and in the accuracy with which it mimics the black box.
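The pipeline sketched in the abstract (label a synthetic neighborhood with the black box, fit an interpretable local surrogate, read a factual rule and counterfactuals off the surrogate) can be illustrated with a minimal Python sketch. Note the hedges: simple Gaussian perturbation stands in for the paper's genetic neighborhood generation, a scikit-learn decision tree plays the interpretable classifier, and the function explain_instance with its parameters is an illustrative assumption, not the authors' implementation.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def explain_instance(black_box, x, n_samples=1000, scale=0.3, seed=0):
    rng = np.random.default_rng(seed)
    # 1. Synthetic neighborhood around instance x (Gaussian perturbation
    #    here; the paper uses a genetic algorithm instead).
    Z = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))
    y = black_box.predict(Z)  # the black box labels the neighborhood
    # 2. Interpretable local classifier fit on the labeled neighborhood.
    tree = DecisionTreeClassifier(max_depth=4).fit(Z, y)
    # 3a. Factual rule: the root-to-leaf path that x follows in the tree.
    node_ids = tree.decision_path(x.reshape(1, -1)).indices
    feat, thr = tree.tree_.feature, tree.tree_.threshold
    rule = [
        f"x[{feat[n]}] {'<=' if x[feat[n]] <= thr[n] else '>'} {thr[n]:.3f}"
        for n in node_ids if feat[n] >= 0  # internal nodes only, skip the leaf
    ]
    # 3b. Counterfactuals: neighborhood points the surrogate labels
    #     differently from x, closest to x first.
    pred = tree.predict(x.reshape(1, -1))[0]
    other = Z[tree.predict(Z) != pred]
    counterfactuals = other[np.argsort(np.linalg.norm(other - x, axis=1))][:3]
    return rule, counterfactuals

# Example usage with a random forest as the opaque model:
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
bb = RandomForestClassifier(random_state=0).fit(X, y)
rule, cfs = explain_instance(bb, X[0])

The returned rule lists the feature thresholds that jointly explain the surrogate's (and locally, the black box's) decision, while each counterfactual row shows concrete feature values that flip the predicted class.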
Source: IEEE Intelligent Systems 34 (2019): 14–22. doi:10.1109/MIS.2019.2957223
Publisher: IEEE Computer Society, Los Alamitos, CA, United States of America
@article{oai:it.cnr:prodotti:417414, title = {Factual and counterfactual explanations for black box decision making}, author = {Guidotti R. and Monreale A. and Giannotti F. and Pedreschi D. and Ruggieri S. and Turini F.}, publisher = {IEEE Computer Society, Los Alamitos, CA, United States of America}, doi = {10.1109/mis.2019.2957223}, journal = {IEEE Intelligent Systems}, volume = {34}, pages = {14--22}, year = {2019} }