2019
Conference article (Open Access)

On the stability of interpretable models

Guidotti R., Ruggieri S.

Keywords: Computer Science - Machine Learning; Classifiers; Statistics - Machine Learning; Model Stability; Interpretability; Computer Science - Artificial Intelligence

Interpretable classification models are built to provide a comprehensible description of the decision logic to an external oversight agent. When considered in isolation, a decision tree, a set of classification rules, or a linear model is widely recognized as human-interpretable. However, such models are generated as part of a larger analytical process, and bias in data collection and preparation, or in model construction, may severely affect the accountability of the design process. We conduct an experimental study of the stability of interpretable models with respect to feature selection, instance selection, and model selection. Our conclusions should raise the awareness of the scientific community and draw its attention to the need for a stability impact assessment of interpretable models.
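As a minimal illustration of the kind of stability assessment the abstract describes, the sketch below (an assumption of this note, not code from the paper) measures the stability of a simple correlation-based feature-selection step across bootstrap resamples, scored by the average pairwise Jaccard similarity of the selected feature sets. The function names, the selection criterion, and the synthetic data are all hypothetical choices made here for illustration.

```python
import numpy as np

def top_k_features(X, y, k):
    # Rank features by absolute Pearson correlation with the target
    # (a deliberately simple, hypothetical selection criterion).
    scores = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    return set(np.argsort(scores)[-k:])

def jaccard(a, b):
    # Similarity of two selected feature sets.
    return len(a & b) / len(a | b)

def selection_stability(X, y, k=5, n_boot=20, seed=0):
    # Re-run feature selection on bootstrap resamples and average
    # the pairwise Jaccard similarity of the resulting feature sets.
    rng = np.random.default_rng(seed)
    n = len(y)
    subsets = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)  # bootstrap resample of the instances
        subsets.append(top_k_features(X[idx], y[idx], k))
    sims = [jaccard(subsets[i], subsets[j])
            for i in range(n_boot) for j in range(i + 1, n_boot)]
    return float(np.mean(sims))

# Synthetic data: only the first 5 of 20 features are informative.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 20))
y = (X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=200) > 0).astype(float)
print(round(selection_stability(X, y), 3))
```

A score near 1 means the same features are selected regardless of the resample; a low score signals that small perturbations of the training data change the selected features, and hence the resulting interpretable model.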

Source: IJCNN 2019 - International Joint Conference on Neural Networks (IJCNN), Budapest, Hungary, 14-19 July, 2019


Citations

1. Alvarez-Melis, D., Jaakkola, T.S.: On the robustness of interpretability methods. CoRR abs/1806.08049 (2018)
2. Bousquet, O., Elisseeff, A.: Stability and generalization. Journal of Machine Learning Research 2, 499-526 (2002)
3. Breiman, L., Friedman, J., Stone, C.J., Olshen, R.A.: Classification and regression trees. CRC press (1984)
4. Breslow, L.A., Aha, D.W.: Simplifying decision trees: A survey. The Knowledge Engineering Review 12, 1-40 (1997)
5. Cohen, W.W.: Fast effective rule induction. In: ICML. pp. 115-123. Morgan Kaufmann (1995)
6. Craven, M.W., Shavlik, J.W.: Using sampling and queries to extract rules from trained neural networks. In: Machine Learning Proceedings 1994, pp. 37-45. Elsevier (1994)
7. Danks, D., London, A.J.: Algorithmic bias in autonomous systems. In: IJCAI. pp. 4691-4697. ijcai.org (2017)
8. Demsar, J.: Statistical comparisons of classifiers over multiple data sets. Journal of Machine Learning Research 7, 1-30 (2006)
9. Freitas, A.A.: Comprehensible classification models: A position paper. ACM SIGKDD explorations newsletter 15(1), 1-10 (2014)
10. Guidotti, R., Monreale, A., Ruggieri, S., Pedreschi, D., Turini, F., Giannotti, F.: Local rulebased explanations of black box decision systems. CoRR abs/1805.10820 (2018)
11. Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Pedreschi, D., Giannotti, F.: A survey of methods for explaining black box models. ACM CSUR 51(5), 93:1-93:42 (Aug 2018)
12. Guyon, I., Nikravesh, M., Gunn, S., Zadeh, L.A. (eds.): Feature Extraction: Foundations and Applications, Studies in Fuzziness and Soft Computing, vol. 207. Springer (2006)
13. Hastie, T., Tibshirani, R., Friedman, J.H.: The elements of statistical learning: data mining, inference, and prediction, 2nd Edition. Springer series in statistics, Springer (2009)
14. Huysmans, J., et al.: An empirical evaluation of the comprehensibility of decision table, tree and rule based predictive models. Decision Support Systems 51(1), 141-154 (2011)
15. Kalousis, A., Prados, J., Hilario, M.: Stability of feature selection algorithms: A study on high-dimensional spaces. Knowl. Inf. Syst. 12(1), 95-116 (2007)
16. Katz, G., Shabtai, A., Rokach, L., Ofek, N.: ConfDTree: A statistical method for improving decision trees. J. Comput. Sci. Technol. 29(3), 392-407 (2014)
17. Kim, J.: Estimating classification error rate: Repeated cross-validation, repeated hold-out and bootstrap. Computational Statistics & Data Analysis 53(11), 3735-3745 (2009)
18. Kohavi, R.: A study of cross-validation and bootstrap for accuracy estimation and model selection. In: IJCAI. pp. 1137-1145. Morgan Kaufmann (1995)
19. Kononenko, I., et al.: An efficient explanation of individual classifications using game theory. Journal of Machine Learning Research 11, 1-18 (2010)
20. Li, R., Belford, G.G.: Instability of decision tree classification algorithms. In: KDD. pp. 570-575. ACM (2002)
21. Nogueira, S., Brown, G.: Measuring the stability of feature selection. In: ECML/PKDD (2). Lecture Notes in Computer Science, vol. 9852, pp. 442-457. Springer (2016)
22. Oates, T., Jensen, D.: The effects of training set size on decision tree complexity. In: Proc. of Int. Conf. on Machine Learning (ICML 1997). pp. 254-262. Morgan Kaufmann (1997)
23. Olvera-López, J.A., Carrasco-Ochoa, J.A., Martínez Trinidad, J.F., Kittler, J.: A review of instance selection methods. Artif. Intell. Rev. 34(2), 133-143 (2010)
24. Quinlan, J.R.: C4.5: Programs for Machine Learning. Elsevier (1993)
25. Quinlan, J.R., Cameron-Jones, R.M.: FOIL: A midterm report. In: ECML. Lecture Notes in Computer Science, vol. 667, pp. 3-20. Springer (1993)
26. Ribeiro, M.T., Singh, S., Guestrin, C.: "Why should I trust you?": Explaining the predictions of any classifier. In: KDD. pp. 1135-1144. ACM (2016)
27. Ruggieri, S.: YaDT: Yet another decision tree builder. In: ICTAI. pp. 260-265. IEEE Computer Society (2004)
28. Schwarz, S., Pawlik, M., Augsten, N.: A new perspective on the tree edit distance. In: SISAP. Lecture Notes in Computer Science, vol. 10609, pp. 156-170. Springer (2017)
29. Tibshirani, R.: Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society. Series B (Methodological) pp. 267-288 (1996)
30. Tikhonov, A.: Solution of incorrectly formulated problems and the regularization method. Soviet Math. Dokl. 4, 1035-1038 (1963)
31. Turney, P.D.: Technical note: Bias and the quantification of stability. Machine Learning 20(1-2), 23-33 (1995)
32. Yan, X., Su, X.: Linear regression analysis: theory and computing. World Scientific (2009)
33. Yin, X., Han, J.: CPAR: classification based on predictive association rules. In: SDM. pp. 331-335. SIAM (2003)


Projects (via OpenAIRE)

SoBigData (SoBigData Research Infrastructure)


BibTeX entry
@inproceedings{oai:it.cnr:prodotti:417416,
	title = {On the stability of interpretable models},
	author = {Guidotti R. and Ruggieri S.},
	doi = {10.1109/ijcnn.2019.8852158},
	booktitle = {IJCNN 2019 - International Joint Conference on Neural Networks (IJCNN), Budapest, Hungary, 14-19 July, 2019},
	year = {2019}
}