Adadi A, Berrada M (2018) Peeking inside the black-box: a survey on explainable artificial intelligence (XAI). IEEE Access 6:52138-52160
Alvarez-Melis D, Jaakkola TS (2018) Towards robust interpretability with self-explaining neural networks. In: NeurIPS, pp 7786-7795
Angelino E, Larus-Stone N, Alabi D, Seltzer MI, Rudin C (2017) Learning certifiably optimal rule lists for categorical data. J Mach Learn Res 18:234:1-234:78
Van Assche A, Blockeel H (2007) Seeing the forest through the trees: learning a comprehensible model from an ensemble. In: ECML. Lecture notes in computer science, vol 4701. Springer, pp 418-429
Bäck T, Fogel DB, Michalewicz Z (2000) Evolutionary computation 1: basic algorithms and operators, vol 1. CRC Press, Boca Raton
Bénard C, Biau G, Da Veiga S, Scornet E (2019) SIRUS: making random forests interpretable. CoRR arXiv:1908.06852
Berk R, Heidari H, Jabbari S, Kearns M, Roth A (2018) Fairness in criminal justice risk assessments: the state of the art. Sociol Methods Res 50(1):3-44
Bhatt U, Xiang A, Sharma S, Weller A, Taly A, Jia Y, Ghosh J, Puri R, Moura JMF, Eckersley P (2020) Explainable machine learning in deployment. In: FAT*, ACM, pp 648-657
Breiman L (1996) Bagging predictors. Mach Learn 24(2):123-140
Breiman L (2001) Statistical modeling: the two cultures (with comments and a rejoinder by the author). Stat Sci 16:199-231
Breiman L, Friedman JH, Olshen RA, Stone CJ (1984) Classification and regression trees. Wadsworth, Belmont
Byrne RMJ (2016) Counterfactual thought. Annu Rev Psychol 67(1):135-157
Byrne RMJ, Johnson-Laird P (2009) “If” and the problems of conditional reasoning. Trends Cogn Sci 13(9):282-287
Calegari R, Ciatto G, Denti E, Omicini A (2020) Logic-based technologies for intelligent systems: state of the art and perspectives. Information 11(3):167
Chou Y, Moreira C, Bruza P, Ouyang C, Jorge JA (2022) Counterfactuals and causability in explainable artificial intelligence: theory, algorithms, and applications. Inf Fusion 81:59-83
Darwiche A, Hirth A (2020) On the reasons behind decisions. In: ECAI, IOS Press, frontiers in artificial intelligence and applications, vol 325, pp 712-720
Demšar J (2006) Statistical comparisons of classifiers over multiple data sets. J Mach Learn Res 7:1-30
Derrac J, García S, Herrera F (2010) A survey on evolutionary instance selection and generation. Int J Appl Metaheuristic Comput 1(1):60-92
Doshi-Velez F, Kim B (2017) A roadmap for a rigorous science of interpretability. CoRR arXiv:1702.08608
Evans BP, Xue B, Zhang M (2019) What's inside the black-box? A genetic programming method for interpreting complex machine learning models. In: GECCO, ACM, pp 1012-1020
Fan C et al (2020) Classification acceleration via merging decision trees. In: FODS, ACM, pp 13-22
Fortin F, De Rainville F, Gardner M, Parizeau M, Gagné C (2012) DEAP: evolutionary algorithms made easy. J Mach Learn Res 13:2171-2175
Freitas AA (2013) Comprehensible classification models: a position paper. SIGKDD Explor 15(1):1-10
Fu Y, Zhu X, Li B (2013) A survey on instance selection for active learning. Knowl Inf Syst 35(2):249-283
Gosiewska A, Biecek P (2020) Do not trust additive explanations. CoRR arXiv:1903.11420
Guidotti R (2021) Evaluating local explanation methods on ground truth. Artif Intell 291:103428
Guidotti R (2022) Counterfactual explanations and how to find them: literature review and benchmarking. Data Min Knowl Discov. https://doi.org/10.1007/s10618-022-00831-6
Guidotti R, Monreale A (2020) Data-agnostic local neighborhood generation. In: ICDM, IEEE, pp 1040-1045
Guidotti R, Ruggieri S (2019) On the stability of interpretable models. In: IJCNN, IEEE, pp 1-8
Guidotti R, Monreale A, Cariaggi L (2019a) Investigating neighborhood generation methods for explanations of obscure image classifiers. In: PAKDD (1). Lecture notes in computer science, vol 11439. Springer, pp 55-68
Guidotti R, Monreale A, Giannotti F, Pedreschi D, Ruggieri S, Turini F (2019b) Factual and counterfactual explanations for black box decision making. IEEE Intell Syst 34(6):14-23
Guidotti R, Monreale A, Matwin S, Pedreschi D (2019c) Black box explanation by learning image exemplars in the latent feature space. In: ECML/PKDD (1). Lecture notes in computer science, vol 11906. Springer, pp 189-205
Guidotti R, Monreale A, Ruggieri S, Turini F, Giannotti F, Pedreschi D (2019d) A survey of methods for explaining black box models. ACM Comput Surv 51(5):93:1-93:42
Guyon I (2003) Design of experiments of the NIPS 2003 variable selection benchmark. In: NIPS workshops
Holland JH (1992) Adaptation in natural and artificial systems: an introductory analysis with applications to biology, control, and artificial intelligence. MIT Press, Cambridge
Jia Y, Bailey J, Ramamohanarao K, Leckie C, Houle ME (2019) Improving the quality of explanations with local embedding perturbations. In: KDD, ACM, pp 875-884
Karimi A, Barthe G, Schölkopf B, Valera I (2020) A survey of algorithmic recourse: definitions, formulations, solutions, and prospects. CoRR arXiv:2010.04050
Klimke A (2003) RANDEXPR: a random symbolic expression generator. Technical report 4, Universität Stuttgart
Lakkaraju H, Bach SH, Leskovec J (2016) Interpretable decision sets: a joint framework for description and prediction. In: KDD, ACM, pp 1675-1684
Laugel T, Renard X, Lesot M, Marsala C, Detyniecki M (2018) Defining locality for surrogates in post-hoc interpretability. CoRR arXiv:1806.07498
Li X, Cao CC, Shi Y, Bai W, Gao H, Qiu L, Wang C, Gao Y, Zhang S, Xue X, Chen L (2022) A survey of data-driven and knowledge-aware explainable AI. IEEE Trans Knowl Data Eng 34(1):29-49
Liu FT, Ting KM, Zhou ZH (2008) Isolation forest. In: ICDM, IEEE, pp 413-422
Lucic A, Oosterhuis H, Haned H, de Rijke M (2019) Actionable interpretability through optimizable counterfactual explanations for tree ensembles. CoRR arXiv:1911.12199
Lucic A, Haned H, de Rijke M (2020) Why does my model fail? Contrastive local explanations for retail forecasting. In: FAT*, ACM, pp 90-98
Lundberg SM, Lee S (2017) A unified approach to interpreting model predictions. In: NIPS, pp 4765-4774
Malgieri G, Comandé G (2017) Why a right to legibility of automated decision-making exists in the GDPR. Int Data Privacy Law 7(4):243-265
McCane B, Albert M (2008) Distance functions for categorical and mixed variables. Pattern Recognit Lett 29(7):986-993
Miller T (2019) Explanation in artificial intelligence: insights from the social sciences. Artif Intell 267:1-38
Ming Y, Qu H, Bertini E (2019) RuleMatrix: visualizing and understanding classifiers with rules. IEEE Trans Vis Comput Graph 25(1):342-352
Minh D, Wang HX, Li YF, Nguyen TN (2022) Explainable artificial intelligence: a comprehensive review. Artif Intell Rev. To appear
Molnar C (2019) Interpretable machine learning. Lulu Press, Morrisville
Moraffah R, Karami M, Guo R, Raglin A, Liu H (2020) Causal interpretability for machine learning: problems, methods and evaluation. SIGKDD Explor 22(1):18-33
Mothilal RK, Sharma A, Tan C (2020) Explaining machine learning classifiers through diverse counterfactual explanations. In: FAT*, ACM, pp 607-617
Murthy SK, Kasif S, Salzberg S (1994) A system for induction of oblique decision trees. J Artif Intell Res 2:1-32
Ntoutsi E et al (2020) Bias in data-driven artificial intelligence systems: an introductory survey. WIREs Data Min Knowl Discov 10(3):e1356
Olvera-López JA, Carrasco-Ochoa JA, Martínez-Trinidad JF, Kittler J (2010) A review of instance selection methods. Artif Intell Rev 34(2):133-143
Panigutti C, Guidotti R, Monreale A, Pedreschi D (2020) Explaining multi-label black-box classifiers for health applications. In: Precision health and medicine, studies in computational intelligence, vol 843. Springer, pp 97-110
Pasquale F (2015) The black box society: the secret algorithms that control money and information. Harvard University Press, Cambridge
Pedreschi D, Giannotti F, Guidotti R, Monreale A, Ruggieri S, Turini F (2019) Meaningful explanations of black box AI decision systems. In: AAAI, AAAI Press, pp 9780-9784
Plumb G, Molitor D, Talwalkar AS (2018) Model agnostic supervised local explanations. In: NeurIPS, pp 2520-2529
Ribeiro MT, Singh S, Guestrin C (2016) "Why should I trust you?": explaining the predictions of any classifier. In: KDD, ACM, pp 1135-1144
Ribeiro MT, Singh S, Guestrin C (2018) Anchors: high-precision model-agnostic explanations. In: AAAI, AAAI Press, pp 1527-1535
Rudin C (2019) Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat Mach Intell 1(5):206-215
Russell C (2019) Efficient search for diverse coherent explanations. In: FAT*, ACM, pp 20-28
Sagi O, Rokach L (2018) Ensemble learning: a survey. WIREs Data Min Knowl Discov 8(4):e1249
Sagi O, Rokach L (2020) Explainable decision forest: transforming a decision forest into an interpretable tree. Inf Fusion 61:124-138
Sharma S, Henderson J, Ghosh J (2019) CERTIFAI: counterfactual explanations for robustness, transparency, interpretability, and fairness of artificial intelligence models. CoRR arXiv:1905.07857
Shih A, Choi A, Darwiche A (2018) A symbolic approach to explaining Bayesian network classifiers. In: IJCAI, ijcai.org, pp 5103-5111
Sokol K, Flach PA (2019) Desiderata for interpretability: explaining decision tree predictions with counterfactuals. In: AAAI, AAAI Press, pp 10035-10036
Strecht P, Mendes-Moreira J, Soares C (2014) Merging decision trees: a case study in predicting student performance. In: ADMA. Lecture notes in computer science, vol 8933. Springer, pp 535-548
Sundararajan M, Najmi A (2020) The many Shapley values for model explanation. In: ICML, PMLR, proceedings of machine learning research, vol 119, pp 9269-9278
Tan P, Steinbach MS, Kumar V (2005) Introduction to data mining. Addison-Wesley, Reading
Tsai C, Eberle W, Chu C (2013) Genetic algorithms in feature and instance selection. Knowl Based Syst 39:240-247
Venkatasubramanian S, Alfano M (2020) The philosophical basis of algorithmic recourse. In: FAT*, ACM, pp 284-293
Verma S, Dickerson JP, Hines K (2020) Counterfactual explanations for machine learning: a review. CoRR arXiv:2010.10596
Vidal T, Schiffer M (2020) Born-again tree ensembles. In: ICML, PMLR, proceedings of machine learning research, vol 119, pp 9743-9753
Virgolin M, Alderliesten T, Bosman PAN (2020) On explaining machine learning models by evolving crucial and compact features. Swarm Evol Comput 53:100640
Wachter S, Mittelstadt B, Russell C (2017) Counterfactual explanations without opening the black box: automated decisions and the GDPR. Harv JL Technol 31:841
Wu S, Olafsson S (2006) Optimal instance selection for improved decision tree induction. In: IIE annual conference proceedings, IISE, p 1
Yang H, Rudin C, Seltzer MI (2017) Scalable Bayesian rule lists. In: ICML, PMLR, proceedings of machine learning research, vol 70, pp 3921-3930
Zafar MR, Khan NM (2019) DLIME: a deterministic local interpretable model-agnostic explanations approach for computer-aided diagnosis systems. CoRR arXiv:1906.10263
Zhang Y, Song K, Sun Y, Tan S, Udell M (2019) Why should you trust my explanation? Understanding uncertainty in LIME. CoRR arXiv:1904.12991