14 result(s)
2020 Conference article Open Access OPEN
Self-Adapting Trajectory Segmentation
Bonavita A, Guidotti R, Nanni M
Identifying the portions of trajectory data where movement ends and a significant stop starts is a basic, yet fundamental task that can affect the quality of any mobility analytics process. Most of the many existing solutions adopted by researchers and practitioners are based on fixed spatial and temporal thresholds stating when the moving object remained still for a significant amount of time; however, such thresholds remain static parameters that the user has to guess. In this work we study trajectory segmentation from a multi-granularity perspective, looking for a better understanding of the problem and for an automatic, parameter-free and user-adaptive solution that flexibly adjusts the segmentation criteria to the specific user under study. Experiments over real data and comparisons against simple competitors show that the flexibility of the proposed method has a positive impact on results.
Source: CEUR WORKSHOP PROCEEDINGS
Project(s): Track and Know via OpenAIRE

See at: ceur-ws.org Open Access | CNR IRIS Open Access | ISTI Repository Open Access | CNR IRIS Restricted
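The fixed-threshold segmentation that this paper improves upon can be sketched in a few lines. This is a hypothetical baseline, not the paper's method: points are `(x, y, t)` tuples, and `space_thr`/`time_thr` are exactly the static parameters the paper argues users should not have to guess.

```python
# Baseline fixed-threshold stop detection: a stop is a run of points that
# stays within `space_thr` distance units of an anchor point for at least
# `time_thr` time units. Thresholds are illustrative assumptions.
def detect_stops(points, space_thr=50.0, time_thr=300.0):
    """Return (start_idx, end_idx) pairs of detected stops in a trajectory."""
    stops = []
    i = 0
    while i < len(points):
        ax, ay, at = points[i]
        j = i + 1
        # extend the candidate stop while points remain within the radius
        while j < len(points):
            x, y, t = points[j]
            if ((x - ax) ** 2 + (y - ay) ** 2) ** 0.5 > space_thr:
                break
            j += 1
        if points[j - 1][2] - at >= time_thr:  # long enough to count as a stop
            stops.append((i, j - 1))
            i = j
        else:
            i += 1
    return stops
```

A parameter-free approach, by contrast, would have to infer suitable values of `space_thr` and `time_thr` per user from the data itself.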


2020 Conference article Restricted
Data-agnostic local neighborhood generation
Guidotti R., Monreale A.
Synthetic data generation has been widely adopted in software testing, data privacy, imbalanced learning, machine learning explanation, etc. In such contexts, it is important to generate data samples located within 'local' areas surrounding specific instances. Local synthetic data can help the learning phase of predictive models, and it is fundamental for methods explaining the local behavior of obscure classifiers. The contribution of this paper is twofold. First, we introduce a method based on generative operators that allows synthetic neighborhood generation by applying specific perturbations to a given input instance. The key factor is a data transformation that makes the method applicable to any type of data, i.e., data-agnostic. Second, we design a framework for evaluating the goodness of local synthetic neighborhoods exploiting both supervised and unsupervised methodologies. Extensive experiments show the effectiveness of the proposed method.
DOI: 10.1109/icdm50108.2020.00122
Project(s): XAI via OpenAIRE, SoBigData-PlusPlus via OpenAIRE

See at: dblp.uni-trier.de Restricted | doi.org Restricted | Archivio della Ricerca - Università di Pisa Restricted | IRIS Cnr Restricted | CNR IRIS Restricted
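The core idea of local neighborhood generation can be illustrated with a minimal sketch. The Gaussian perturbation operator below is an illustrative assumption standing in for the paper's generative operators, and it presumes the instance has already been transformed into a numeric feature vector (the data-agnostic step):

```python
import random

# Minimal sketch: generate a synthetic local neighborhood by perturbing a
# numeric feature vector with small Gaussian noise. Function name, noise
# model, and parameters are illustrative assumptions, not the authors'
# actual generative operators.
def generate_neighborhood(instance, n=100, sigma=0.1, seed=0):
    """Generate n synthetic neighbors of a numeric feature vector."""
    rng = random.Random(seed)
    return [
        [v + rng.gauss(0.0, sigma) for v in instance]
        for _ in range(n)
    ]
```

The evaluation framework described in the paper would then score such a neighborhood, e.g. by how well it supports a local surrogate model or how compactly it clusters around the input instance.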


2020 Contribution to book Open Access OPEN
Explaining multi-label black-box classifiers for health applications
Panigutti C, Guidotti R, Monreale A, Pedreschi D
Today the state-of-the-art performance in classification is achieved by the so-called "black boxes", i.e. decision-making systems whose internal logic is obscure. Such models could revolutionize the health-care system; however, their deployment in real-world diagnosis decision support systems is subject to several risks and limitations due to the lack of transparency. The typical classification problem in health-care requires a multi-label approach since the possible labels are not mutually exclusive, e.g. diagnoses. We propose MARLENA, a model-agnostic method which explains multi-label black box decisions. MARLENA explains an individual decision in three steps. First, it generates a synthetic neighborhood around the instance to be explained using a strategy suitable for multi-label decisions. It then learns a decision tree on such a neighborhood and finally derives from it a decision rule that explains the black box decision. Our experiments show that MARLENA performs well in terms of mimicking the black box behavior while gaining at the same time a notable amount of interpretability through compact decision rules, i.e. rules with limited length.
Source: STUDIES IN COMPUTATIONAL INTELLIGENCE (PRINT), pp. 97-110
DOI: 10.1007/978-3-030-24409-5_9

See at: media.springer.com Open Access | doi.org Restricted | CNR IRIS Restricted | link.springer.com Restricted
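The three-step pattern (synthetic neighborhood, interpretable surrogate, rule extraction) can be sketched as below. To keep the example self-contained, a one-level decision stump stands in for the decision tree, and the single-label setting replaces MARLENA's multi-label one; both are simplifying assumptions:

```python
import random

# Sketch of the neighborhood -> surrogate -> rule pipeline. A decision
# stump replaces the paper's decision tree; Gaussian perturbation and all
# parameters are illustrative assumptions.
def explain_instance(instance, black_box, n=200, sigma=0.5, seed=0):
    rng = random.Random(seed)
    # 1. generate a synthetic neighborhood around the instance
    neigh = [[v + rng.gauss(0, sigma) for v in instance] for _ in range(n)]
    labels = [black_box(z) for z in neigh]
    # 2. fit the simplest surrogate: for each feature, find the threshold
    #    split that best separates the black-box labels (inverted splits
    #    are allowed via the max(acc, 1 - acc) trick)
    best = None
    for f in range(len(instance)):
        for thr in sorted(set(z[f] for z in neigh)):
            pred = [z[f] <= thr for z in neigh]
            acc = sum(p == l for p, l in zip(pred, labels)) / n
            acc = max(acc, 1 - acc)
            if best is None or acc > best[0]:
                best = (acc, f, thr)
    # 3. derive a human-readable rule: which side of the split the
    #    instance falls on, labeled with the black box's own decision
    _, f, thr = best
    op = "<=" if instance[f] <= thr else ">"
    return f"x[{f}] {op} {thr:.2f} -> {black_box(instance)}"
```

For an instance near the decision boundary of a toy black box such as `lambda z: z[0] > 0`, the surrogate recovers a rule on feature 0 with a threshold close to zero.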


2020 Conference article Open Access OPEN
Black box explanation by learning image exemplars in the latent feature space
Guidotti R, Monreale A, Matwin S, Pedreschi D
We present an approach to explain the decisions of black box models for image classification. While using the black box to label images, our explanation method exploits the latent feature space learned through an adversarial autoencoder. The proposed method first generates exemplar images in the latent feature space and learns a decision tree classifier. Then, it selects and decodes exemplars respecting local decision rules. Finally, it visualizes them in a manner that shows to the user how the exemplars can be modified to either stay within their class, or to become counter-factuals by "morphing" into another class. Since we focus on black box decision systems for image classification, the explanation obtained from the exemplars also provides a saliency map highlighting the areas of the image that contribute to its classification, and areas of the image that push it into another class. We present the results of an experimental evaluation on three datasets and two black box models. Besides providing the most useful and interpretable explanations, we show that the proposed method outperforms existing explainers in terms of fidelity, relevance, coherence, and stability.
DOI: 10.1007/978-3-030-46150-8_12
DOI: 10.48550/arxiv.2002.03746
Project(s): AI4EU via OpenAIRE, Track and Know via OpenAIRE, PRO-RES via OpenAIRE, SoBigData via OpenAIRE

See at: arXiv.org e-Print Archive Open Access | arxiv.org Open Access | CNR IRIS Open Access | ISTI Repository Open Access | www.springerprofessional.de Open Access | doi.org Restricted | CNR IRIS Restricted
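The exemplar / counter-exemplar split in the latent space can be sketched as follows. The `encode` and `decode` callables stand in for the adversarial autoencoder of the paper (here, a trivial invertible scaling purely for illustration), and the Gaussian latent perturbation is an assumption:

```python
import random

# Sketch: perturb the instance in latent space, decode each neighbor, and
# partition neighbors by whether the black box keeps the original label
# (exemplars) or flips it (counter-exemplars). encode/decode and sigma
# are illustrative stand-ins for the paper's adversarial autoencoder.
def exemplars(image_vec, black_box, encode, decode, n=200, sigma=0.3, seed=0):
    rng = random.Random(seed)
    target = black_box(image_vec)
    z0 = encode(image_vec)
    same, counter = [], []
    for _ in range(n):
        z = [v + rng.gauss(0.0, sigma) for v in z0]
        x = decode(z)
        (same if black_box(x) == target else counter).append(x)
    return same, counter
```

In the paper, a decision tree trained on such latent neighbors then selects which exemplars and counter-exemplars to decode and visualize.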


2020 Conference article Restricted
Global explanations with local scoring
Setzu M, Guidotti R, Monreale A, Turini F
Artificial Intelligence systems often adopt machine learning models encoding complex algorithms with potentially unknown behavior. As the application of these "black box" models grows, it is our responsibility to understand their inner workings and formulate them in human-understandable explanations. To this end, we propose a rule-based model-agnostic explanation method that follows a local-to-global schema: it generalizes to a global explanation, summarizing the decision logic of a black box, starting from the local explanations of single predicted instances. We define a scoring system based on a rule relevance score to extract global explanations from a set of local explanations in the form of decision rules. Experiments on several datasets and black boxes show the stability and low complexity of the global explanations provided by the proposed solution in comparison with baselines and state-of-the-art global explainers.
Source: COMMUNICATIONS IN COMPUTER AND INFORMATION SCIENCE (PRINT), pp. 159-171. Würzburg, Germany, 16-20 September, 2019
DOI: 10.1007/978-3-030-43823-4_14
Project(s): AI4EU via OpenAIRE, Track and Know via OpenAIRE, PRO-RES via OpenAIRE, XAI via OpenAIRE, SoBigData via OpenAIRE

See at: Communications in Computer and Information Science Restricted | CNR IRIS Restricted | link.springer.com Restricted
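A local-to-global scoring scheme of this kind can be sketched as below. The coverage-times-accuracy score is an illustrative assumption; the authors' actual rule relevance score is richer:

```python
# Sketch: each local explanation is a decision rule (predicate, label).
# Rules are scored by coverage * agreement-with-the-black-box over a
# dataset, and the top-k rules form the global explanation. The scoring
# formula is an illustrative assumption.
def global_explanation(rules, data, black_box, k=2):
    """rules: list of (predicate, label); returns the k best-scored rules."""
    scored = []
    for predicate, label in rules:
        covered = [x for x in data if predicate(x)]
        if not covered:
            continue  # a rule covering nothing cannot generalize
        coverage = len(covered) / len(data)
        accuracy = sum(black_box(x) == label for x in covered) / len(covered)
        scored.append((coverage * accuracy, predicate, label))
    scored.sort(key=lambda s: -s[0])
    return scored[:k]
```

On a toy black box `lambda x: x[0] > 0`, a broad accurate rule outranks a narrow one even when both are perfectly faithful.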


2020 Conference article Open Access OPEN
Data-Driven Location Annotation for Fleet Mobility Modeling
Guidotti R, Nanni M, Sbolgi F
The large availability of mobility data allows studying human behavior and human activities. However, this massive and raw amount of data generally lacks any detailed semantics or useful categorization. Annotations of the locations where the users stop may be helpful in a number of contexts, including user modeling and profiling, urban planning, and activity recommendations, and can even lead to a deeper understanding of the mobility evolution of an urban area. In this paper, we foster the expressive power of individual mobility networks, a data model describing users' behavior, by defining a data-driven procedure for location annotation. The procedure considers individual, collective, and contextual features for turning locations into annotated ones. The annotated locations have a high expressiveness that allows generalizing individual mobility networks and makes them comparable across different users. The results of our study on a dataset of trucks moving in Greece show that the annotated individual mobility networks can enable detailed analysis of urban areas and the planning of advanced mobility applications.
Source: CEUR WORKSHOP PROCEEDINGS
Project(s): Track and Know via OpenAIRE

See at: ceur-ws.org Open Access | CNR IRIS Open Access | ISTI Repository Open Access | CNR IRIS Restricted


2020 Journal article Open Access OPEN
Human migration: the big data perspective
Sîrbu A, Andrienko G, Andrienko N, Boldrini C, Conti M, Giannotti F, Guidotti R, Bertoli S, Kim J, Muntean Ci, Pappalardo L, Passarella A, Pedreschi D, Pollacci L, Pratesi F, Sharma R
How can big data help to understand the migration phenomenon? In this paper, we try to answer this question through an analysis of various phases of migration, comparing traditional and novel data sources and models at each phase. We concentrate on three phases of migration, at each phase describing the state of the art and recent developments and ideas. The first phase includes the journey, and we study migration flows and stocks, providing examples where big data can have an impact. The second phase discusses the stay, i.e. migrant integration in the destination country. We explore various data sets and models that can be used to quantify and understand migrant integration, with the final aim of providing the basis for the construction of a novel multi-level integration index. The last phase is related to the effects of migration on the source countries and the return of migrants.
Source: INTERNATIONAL JOURNAL OF DATA SCIENCE AND ANALYTICS, vol. 11, pp. 341-360
DOI: 10.1007/s41060-020-00213-5
Project(s): SoBigData via OpenAIRE

See at: International Journal of Data Science and Analytics Open Access | CNR IRIS Open Access | link.springer.com Open Access | ISTI Repository Open Access | HAL Clermont Université Restricted | CNR IRIS Restricted | Fraunhofer-ePrints Restricted


2020 Contribution to book Open Access OPEN
"Know thyself" how personal music tastes shape the last.fm online social network
Guidotti R, Rossetti G
As Nietzsche once wrote, "Without music, life would be a mistake" (Twilight of the Idols, 1889). The music we listen to reflects our personality and our way of approaching life. To support self-awareness, we devised a Personal Listening Data Model that allows for capturing individual music preferences and patterns of music consumption. We applied our model to 30k users of Last.Fm for which we collected both friendship ties and multiple listening records. Starting from such rich data we performed an analysis whose final aim was twofold: (i) capture, and characterize, the individual dimension of music consumption in order to identify clusters of like-minded Last.Fm users; (ii) analyze if, and how, such clusters relate to the social structure expressed by the users in the service. Do there exist individuals having similar Personal Listening Data Models? If so, are they directly connected in the social graph, or do they belong to the same community?
DOI: 10.1007/978-3-030-54994-7_11
Project(s): Track and Know via OpenAIRE, SoBigData via OpenAIRE

See at: CNR IRIS Open Access | link.springer.com Open Access | ISTI Repository Open Access | doi.org Restricted | CNR IRIS Restricted


2020 Conference article Open Access OPEN
Interpretable next basket prediction boosted with representative recipes
Guidotti R., Viotto S.
Food is an essential element of our lives and cultures, and a crucial part of human experience. The study of food purchases can drive the design of practical services such as next basket prediction and shopping list reminders. Current approaches aimed at realizing these services do not exploit a contextual dimension involving food, i.e., recipes. To this aim, we design a next basket predictor based on representative recipes, able to exploit the interest of customers towards certain ingredients when making the recommendation. The proposed method first identifies the representative recipes of a customer by analyzing her purchases and then estimates the rating of the items for the prediction. The ratings are based on both the purchases and the ingredients of the representative recipes. In addition, through our method it is easy to justify why a specific set of items is predicted, while such explanations are often not easily available in many other effective but opaque recommenders. Experimentation on a real-world dataset shows that the usage of recipes improves the performance of existing next basket predictors.
DOI: 10.1109/cogmi50398.2020.00018
Project(s): AI4EU via OpenAIRE, TAILOR via OpenAIRE, SoBigData-PlusPlus via OpenAIRE

See at: IRIS Cnr Open Access | arpi.unipi.it Restricted | doi.org Restricted | Archivio della Ricerca - Università di Pisa Restricted | CNR IRIS Restricted
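A recipe-boosted item rating of this flavor can be sketched as below. The additive score and the flat `alpha` bonus for ingredients of representative recipes are illustrative assumptions, not the paper's rating formula:

```python
from collections import Counter

# Sketch: rank candidate items by purchase frequency plus a flat bonus for
# items that appear as ingredients of the customer's representative
# recipes. The weighting scheme is an illustrative assumption.
def predict_basket(past_baskets, recipes, k=3, alpha=0.5):
    """Return the k top-rated items for the next basket."""
    freq = Counter(item for basket in past_baskets for item in basket)
    total = sum(len(b) for b in past_baskets) or 1
    recipe_items = {i for r in recipes for i in r}
    candidates = set(freq) | recipe_items

    def score(item):
        # frequency component + recipe-ingredient bonus
        return freq[item] / total + (alpha if item in recipe_items else 0.0)

    # ties broken alphabetically for determinism
    return sorted(candidates, key=lambda it: (-score(it), it))[:k]
```

Because each recommended item traces back to either a purchase count or a named recipe ingredient, the prediction is easy to justify, which is the interpretability angle of the paper.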


2020 Conference article Open Access OPEN
Explaining image classifiers generating exemplars and counter-exemplars from latent representations
Guidotti R., Monreale A., Matwin S., Pedreschi D.
We present an approach to explain the decisions of black box image classifiers through synthetic exemplars and counter-exemplars learnt in the latent feature space. Our explanation method exploits the latent representations learned through an adversarial autoencoder for generating a synthetic neighborhood of the image for which an explanation is required. A decision tree is trained on a set of images represented in the latent space, and its decision rules are used to generate exemplar images showing how the original image can be modified to stay within its class. Counterfactual rules are used to generate counter-exemplars showing how the original image can "morph" into another class. The explanation also comprises a saliency map highlighting the areas of the image that contribute to its classification, and areas that push it into another class. A wide and deep experimental evaluation proves that the proposed method outperforms existing explainers in terms of fidelity, relevance, coherence, and stability, besides providing the most useful and interpretable explanations.
DOI: 10.1609/aaai.v34i09.7116
Project(s): AI4EU via OpenAIRE, PRO-RES via OpenAIRE, SoBigData via OpenAIRE, Humane AI via OpenAIRE

See at: CNR IRIS Open Access | ojs.aaai.org Open Access | CNR IRIS Restricted


2020 Conference article Restricted
Explaining explanation methods
Guidotti R.
The most effective Artificial Intelligence (AI) systems exploit complex machine learning models to fulfill their tasks due to their high performance. Unfortunately, the most effective machine learning models base their decision processes on a logic that is not understandable by humans, which makes them true black-box models. The lack of transparency on how AI systems make decisions is a clear limitation to their adoption in safety-critical and socially sensitive contexts. Consequently, since AI is employed in a wide variety of applications, research in eXplainable AI (XAI) has recently attracted much attention, with distinct requirements for different types of explanations and different users. In this paper, we briefly present the existing explanation problems, the main strategies adopted to solve them, and the desiderata for XAI methods. Finally, the most common types of explanations are illustrated with references to state-of-the-art explanation methods able to retrieve them.
Source: CEUR WORKSHOP PROCEEDINGS, vol. 2741, pp. 6-13. Online, 30/07/2020

See at: ceur-ws.org Restricted | CNR IRIS Restricted


2020 Conference article Open Access OPEN
Explaining sentiment classification with synthetic exemplars and counter-exemplars
Lampridis O., Guidotti R., Ruggieri S.
We present xspells, a model-agnostic local approach for explaining the decisions of a black box model for sentiment classification of short texts. The explanations provided consist of a set of exemplar sentences and a set of counter-exemplar sentences. The former are examples classified by the black box with the same label as the text to explain. The latter are examples classified with a different label (a form of counter-factuals). Both are close in meaning to the text to explain, and both are meaningful sentences, albeit synthetically generated. xspells generates neighbors of the text to explain in a latent space using Variational Autoencoders for encoding text and decoding latent instances. A decision tree is learned from randomly generated neighbors, and used to drive the selection of the exemplars and counter-exemplars. We report experiments on two datasets showing that xspells outperforms the well-known lime method in terms of quality of explanations, fidelity, and usefulness, and that it is comparable to it in terms of stability.
Source: LECTURE NOTES IN COMPUTER SCIENCE, vol. 12323, pp. 357-373. Thessaloniki, Greece, 19-21/10/2020
DOI: 10.1007/978-3-030-61527-7_24
DOI: 10.60692/t16jb-rqr39
DOI: 10.60692/a9rts-w6786
Project(s): AI4EU via OpenAIRE, NoBIAS via OpenAIRE, HumanE-AI-Net via OpenAIRE, XAI via OpenAIRE, SoBigData-PlusPlus via OpenAIRE

See at: PubMed Central Open Access | IRIS Cnr Open Access | link.springer.com Open Access | Software Heritage Restricted | Archivio della Ricerca - Università di Pisa Restricted | doi.org Restricted | GitHub Restricted | CNR IRIS Restricted | rd.springer.com Restricted


2020 Journal article Open Access OPEN
(So) Big Data and the transformation of the city
Andrienko G, Andrienko N, Boldrini C, Caldarelli G, Cintia P, Cresci S, Facchini A, Giannotti F, Gionis A, Guidotti R, Mathioudakis M, Muntean Ci, Pappalardo L, Pedreschi D, Pournaras E, Pratesi F, Tesconi M, Trasarti R
The exponential increase in the availability of large-scale mobility data has fueled the vision of smart cities that will transform our lives. The truth is that we have just scratched the surface of the research challenges that should be tackled in order to make this vision a reality. Consequently, there is an increasing interest among different research communities (ranging from civil engineering to computer science) and industrial stakeholders in building knowledge discovery pipelines over such data sources. At the same time, this widespread data availability also raises privacy issues that must be considered by both industrial and academic stakeholders. In this paper, we provide a wide perspective on the role that big data have in reshaping cities. The paper covers the main aspects of urban data analytics, focusing on privacy issues, algorithms, applications and services, and georeferenced data from social media. In discussing these aspects, we leverage, as concrete examples and case studies of urban data science tools, the results obtained in the "City of Citizens" thematic area of the Horizon 2020 SoBigData initiative, which includes a virtual research environment with mobility datasets and urban analytics methods developed by several institutions around Europe. We conclude the paper outlining the main research challenges that urban data science has yet to address in order to help make the smart city vision a reality.
Source: INTERNATIONAL JOURNAL OF DATA SCIENCE AND ANALYTICS, vol. 1
DOI: 10.1007/s41060-020-00207-3
Project(s): SoBigData via OpenAIRE

See at: Aaltodoc Publication Archive Open Access | International Journal of Data Science and Analytics Open Access | White Rose Research Online Open Access | HELDA - Digital Repository of the University of Helsinki Open Access | Archivio istituzionale della ricerca - Università degli Studi di Venezia Ca' Foscari Open Access | CNR IRIS Open Access | link.springer.com Open Access | City Research Online Open Access | ISTI Repository Open Access | CNR IRIS Restricted | Fraunhofer-ePrints Restricted


2020 Journal article Open Access OPEN
Evaluating local explanation methods on ground truth
Guidotti R.
Evaluating local explanation methods is a difficult task due to the lack of a shared and universally accepted definition of explanation. In the literature, one of the most common ways to assess the performance of an explanation method is to measure the fidelity of the explanation with respect to the classification of a black box model adopted by an Artificial Intelligence system for making a decision. However, this kind of evaluation only measures the degree of adherence of the local explainer in reproducing the behavior of the black box classifier with respect to the final decision. Therefore, the explanation provided by the local explainer could differ in content even though it leads to the same decision of the AI system. In this paper, we propose an approach that allows measuring the extent to which the explanations returned by local explanation methods are correct with respect to a synthetic ground truth explanation. Indeed, the proposed methodology enables the generation of synthetic transparent classifiers for which the reason for the decision taken, i.e., a synthetic ground truth explanation, is available by design. Experimental results show how the proposed approach makes it easy to evaluate local explanations on the ground truth and to characterize the quality of local explanation methods.
Source: ARTIFICIAL INTELLIGENCE, vol. 291
DOI: 10.1016/j.artint.2020.103428
Project(s): AI4EU via OpenAIRE, TAILOR via OpenAIRE, HumanE-AI-Net via OpenAIRE, XAI via OpenAIRE, SoBigData-PlusPlus via OpenAIRE

See at: Artificial Intelligence Open Access | IRIS Cnr Open Access | Archivio della Ricerca - Università di Pisa Restricted | CNR IRIS Restricted
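The core idea of a transparent classifier with a built-in ground truth explanation can be sketched as follows. A sparse linear classifier stands in for the paper's synthetic transparent classifiers, and the F1-style overlap score is an illustrative assumption; the paper's evaluation measures are richer:

```python
# Sketch: a linear classifier whose non-zero coefficients ARE the
# ground-truth explanation, plus an F1-style overlap score between the
# features an explainer selects and that ground truth. Both constructions
# are illustrative assumptions.
def make_transparent_classifier(coefs):
    """Return (classifier, ground-truth feature set) for given coefficients."""
    truth = {i for i, c in enumerate(coefs) if c != 0}

    def classify(x):
        return sum(c * v for c, v in zip(coefs, x)) > 0

    return classify, truth

def explanation_f1(explained_features, truth):
    """F1 overlap between an explainer's selected features and the truth."""
    tp = len(set(explained_features) & truth)
    if tp == 0:
        return 0.0
    precision = tp / len(set(explained_features))
    recall = tp / len(truth)
    return 2 * precision * recall / (precision + recall)
```

A local explainer run against `classify` can then be scored directly: features it attributes the decision to are compared with `truth`, with no need to trust a black box as the reference.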