2022
Journal article  Open Access

Explainable AI for time series classification: a review, taxonomy and research directions

Theissler A., Spinnato F., Schlegel U., Guidotti R.

Keywords: Time series classification; Temporal data analysis; Explainable artificial intelligence; Interpretable machine learning

Time series data is increasingly used across a wide range of fields and is often relied on in crucial applications and high-stakes decision-making. For instance, sensors generate time series data that automatic decision-making systems use to recognize different types of anomalies. Typically, these systems are realized with machine learning models that achieve top-tier performance on time series classification tasks. Unfortunately, the logic behind their predictions is opaque and hard to understand from a human standpoint. In recent years, we have observed a steady increase in the development of explanation methods for time series classification, justifying the need to structure and review the field. In this work, we (a) present the first extensive literature review on Explainable AI (XAI) for time series classification, (b) categorize the research field through a taxonomy subdividing the methods into time points-based, subsequences-based, and instance-based, and (c) identify open research directions regarding the type of explanations and the evaluation of explanations and interpretability.

Source: IEEE Access, vol. 10, pp. 100700–100724


2399 [172] R. Tavenard, J. Faouzi, G. Vandewiele, F. Divo, G. Androz, C. Holtz, 2400 M. Payne, R. Yurchak, M. Ruÿwurm, K. Kolar, and E. Woods, ``Tslearn, 2401 a machine learning toolkit for time series data,'' J. Mach. Learn. Res., 2402 vol. 21, no. 118, pp. 1 6, 2020.
2403 [173] J. Faouzi and H. Janati, ``pyts: A Python package for time series classi - 2404 cation,'' J. Mach. Learn. Res., vol. 21, no. 46, pp. 1 6, 2020.
2405 [174] R. Sevastjanova, F. Beck, B. Ell, C. Turkay, R. Henkin, M. Butt, 2406 D. Keim, and M. El-Assady, ``Going beyond visualization. Verbaliza2407 tion as complementary medium to explain machine learning models,'' in 2408 Proc. Workshop Vis. AI Explainability (VIS), 2018. [Online]. Available: 2409 https://openaccess.city.ac.uk/id/eprint/21848/ 2410 [175] Q. Vera Liao and K. R. Varshney, ``Human-centered explainable AI 2411 (XAI): From algorithms to user experiences,'' 2021, arXiv:2110.10790.
2412 [176] A. Holzinger, C. Biemann, C. S. Pattichis, and D. B. Kell, ``What do we 2413 need to build explainable AI systems for the medical domain?'' 2017, 2414 arXiv:1712.09923.
2415 [177] C. Panigutti, A. Perotti, and D. Pedreschi, ``Doctor XAI: An ontology2416 based approach to black-box sequential data classi cation explana2417 tions,'' in Proc. Conf. Fairness, Accountability, Transparency, Jan. 2020, 2418 pp. 629 639.
2419 [178] J. N. Paredes, J. Carlos L. Teze, G. I. Simari, and M. V. Martinez, ``On the 2420 importance of domain-speci c explanations in AI-based cybersecurity 2421 systems (technical report),'' 2021, arXiv:2108.02006.
2422 [179] H. A. Dau, E. Keogh, K. Kamgar, C.-C. M. Yeh, Y. Zhu, S. Gharghabi, 2423 C. A. Ratanamahatana, Y. Chen, B. Hu, N. Begum, A. Bag2424 nall, A. Mueen, G. Batista, and M. L. Hexagon. (Oct. 2018).
2425 The UCR Time Series Classi cation Archive. [Online]. Available: 2426 https://www.cs.ucr.edu/~eamonn/time_series_data_2018/ 2427 [180] M. K. Belaid, E. Hüllermeier, M. Rabus, and R. Krestel, ``Do we need 2428 another explainable AI method? Toward unifying post-hoc XAI eval2429 uation methods into an interactive and multi-dimensional benchmark,'' 2430 2022, arXiv:2207.14160.
2431 [181] C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie, 2432 ``The Caltech-UCSD Birds-200-2011 dataset,'' California Inst. Technol., 2433 Pasadena, CA, USA, Tech. Rep. CNS-TR-2011-001, 2011.
2434 [182] L. Arras, A. Osman, and W. Samek, ``CLEVR-XAI: A benchmark dataset 2435 for the ground truth evaluation of neural network explanations,'' Inf.
2436 Fusion, vol. 81, pp. 14 40, May 2022.

BibTeX entry
@article{oai:it.cnr:prodotti:482061,
	title = {Explainable AI for time series classification: a review, taxonomy and research directions},
	author = {Theissler A. and Spinnato F. and Schlegel U. and Guidotti R.},
	doi = {10.1109/access.2022.3207765},
	year = {2022}
}

TAILOR: Foundations of Trustworthy AI - Integrating Reasoning, Learning and Optimization
HumanE-AI-Net: HumanE AI Network
SAI: Social Explainable Artificial Intelligence
XAI: Science and technology for the explanation of AI decision making
SoBigData-PlusPlus: SoBigData++: European Integrated Infrastructure for Social Mining and Big Data Analytics

