2023
Journal article
Open Access
Understanding any time series classifier with a subsequence-based explainer
Spinnato F, Guidotti R, Monreale A, Nanni M, Pedreschi D, Giannotti F

The growing availability of time series data has increased the usage of classifiers for this data type. Unfortunately, state-of-the-art time series classifiers are black-box models and, therefore, not usable in critical domains such as healthcare or finance, where explainability can be a crucial requirement. This paper presents a framework to explain the predictions of any black-box classifier for univariate and multivariate time series. The provided explanation is composed of three parts. First, a saliency map highlights the most important parts of the time series for the classification. Second, an instance-based explanation exemplifies the black-box's decision by providing a set of prototypical and counterfactual time series. Third, a factual and counterfactual rule-based explanation reveals the reasons for the classification through logical conditions based on subsequences that must, or must not, be contained in the time series. Experiments and benchmarks show that the proposed method provides faithful, meaningful, stable, and interpretable explanations.

Source: ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA, vol. 18 (issue 2), pp. 1-34
DOI: 10.1145/3624480
Project(s): TAILOR, XAI, SoBigData-PlusPlus
See at: dl.acm.org | CNR IRIS | ISTI Repository | ACM Transactions on Knowledge Discovery from Data
2022
Journal article
Open Access
Explainable AI for time series classification: a review, taxonomy and research directions
Theissler A., Spinnato F., Schlegel U., Guidotti R.

Time series data is increasingly used in a wide range of fields, and it is often relied on in crucial applications and high-stakes decision-making. For instance, sensors generate time series data to recognize different types of anomalies through automatic decision-making systems. Typically, these systems are realized with machine learning models that achieve top-tier performance on time series classification tasks. Unfortunately, the logic behind their predictions is opaque and hard to understand from a human standpoint. Recently, we have observed a consistent increase in the development of explanation methods for time series classification, justifying the need to structure and review the field. In this work, we (a) present the first extensive literature review on Explainable AI (XAI) for time series classification, (b) categorize the research field through a taxonomy subdividing the methods into time points-based, subsequences-based, and instance-based, and (c) identify open research directions regarding the type of explanations and the evaluation of explanations and interpretability.

Source: IEEE ACCESS, vol. 10, pp. 100700-100724
DOI: 10.1109/access.2022.3207765
Project(s): TAILOR, HumanE-AI-Net, XAI, SoBigData-PlusPlus, Social Explainable Artificial Intelligence (SAI)
See at: IEEE Access | Archivio istituzionale della Ricerca - Scuola Normale Superiore | CNR IRIS | ieeexplore.ieee.org | Konstanzer Online-Publikations-System | Software Heritage | Archivio della Ricerca - Università di Pisa | GitHub | IRIS Cnr
2023
Conference article
Open Access
Geolet: an interpretable model for trajectory classification
Landi C, Spinnato F, Guidotti R, Monreale A, Nanni M

The large and diverse availability of mobility data enables the development of predictive models capable of recognizing various types of movements. Through a variety of GPS devices, any moving entity, animal, person, or vehicle can generate spatio-temporal trajectories. This data is used to infer migration patterns, manage traffic in large cities, and monitor the spread and impact of diseases, all critical situations that necessitate a thorough understanding of the underlying problem. Researchers, businesses, and governments use mobility data to make decisions that affect people's lives in many ways, employing accurate but opaque deep learning models that are difficult to interpret from a human standpoint. To address these limitations, we propose Geolet, a human-interpretable machine-learning model for trajectory classification. We use discriminative sub-trajectories extracted from mobility data to turn trajectories into a simplified representation that can be used as input by any machine learning classifier. We test our approach against state-of-the-art competitors on real-world datasets. Geolet outperforms black-box models in terms of accuracy while being orders of magnitude faster than its interpretable competitors.

DOI: 10.1007/978-3-031-30047-9_19
Project(s): TAILOR, XAI, SoBigData-PlusPlus, Humane AI
See at: CNR IRIS | link.springer.com | ISTI Repository | doi.org