13 result(s)
2019 Journal article Open Access
The AI black box explanation problem
Guidotti R., Monreale A., Pedreschi D.
Explainable AI is an essential component of a "Human AI", i.e., an AI that expands human experience instead of replacing it. It will be impossible to gain people's trust in AI tools that make crucial decisions in an opaque way, without explaining the rationale followed, especially in areas where we do not want to completely delegate decisions to machines. Source: ERCIM news (2019): 12–13.
Project(s): SoBigData via OpenAIRE

See at: ercim-news.ercim.eu Open Access | ISTI Repository Open Access | CNR ExploRA


2019 Conference article Open Access
On the stability of interpretable models
Guidotti R., Ruggieri S.
Interpretable classification models are built with the purpose of providing a comprehensible description of the decision logic to an external oversight agent. When considered in isolation, a decision tree, a set of classification rules, or a linear model are widely recognized as human-interpretable. However, such models are generated as part of a larger analytical process. Bias in data collection and preparation, or in the model's construction, may severely affect the accountability of the design process. We conduct an experimental study of the stability of interpretable models with respect to feature selection, instance selection, and model selection. Our conclusions should raise the scientific community's awareness of the need for a stability impact assessment of interpretable models. Source: IJCNN 2019 - International Joint Conference on Neural Networks (IJCNN), Budapest, Hungary, 14-19 July, 2019
DOI: 10.1109/ijcnn.2019.8852158
DOI: 10.48550/arxiv.1810.09352
Project(s): SoBigData via OpenAIRE


See at: arXiv.org e-Print Archive Open Access | arxiv.org Open Access | ISTI Repository Open Access | doi.org Restricted | doi.org Restricted | ieeexplore.ieee.org Restricted | CNR ExploRA
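The stability assessment above lends itself to a compact illustration. The sketch below is hypothetical, not the authors' protocol: it ranks features by absolute correlation with the label as a stand-in for an interpretable model, repeats the selection over bootstrap resamples (instance selection), and scores stability as the average pairwise Jaccard similarity of the selected feature sets:

```python
import random

def top_k_features(X, y, k):
    # Rank features by absolute correlation with the label (a simple proxy
    # for fitting an interpretable model and reading off its features).
    n, d = len(X), len(X[0])
    scores = []
    for j in range(d):
        col = [row[j] for row in X]
        mx, my = sum(col) / n, sum(y) / n
        cov = sum((c - mx) * (t - my) for c, t in zip(col, y))
        vx = sum((c - mx) ** 2 for c in col) ** 0.5
        vy = sum((t - my) ** 2 for t in y) ** 0.5
        scores.append(abs(cov / (vx * vy)) if vx and vy else 0.0)
    return set(sorted(range(d), key=lambda j: -scores[j])[:k])

def jaccard(a, b):
    return len(a & b) / len(a | b)

def stability(X, y, k=2, runs=20, seed=0):
    # Repeat feature selection over bootstrap resamples of the instances
    # and average the pairwise Jaccard similarity of the selected sets.
    rng = random.Random(seed)
    idx = list(range(len(X)))
    sets = []
    for _ in range(runs):
        sample = [rng.choice(idx) for _ in idx]  # bootstrap instance selection
        sets.append(top_k_features([X[i] for i in sample],
                                   [y[i] for i in sample], k))
    pairs = [(a, b) for i, a in enumerate(sets) for b in sets[i + 1:]]
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)
```

A stability score near 1 means the same features are selected regardless of the sample drawn; low scores signal exactly the accountability risk the paper warns about.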


2019 Conference article Open Access
Investigating neighborhood generation methods for explanations of obscure image classifiers
Guidotti R., Monreale A., Cariaggi L.
Given the wide use of machine learning approaches based on opaque prediction models, understanding the reasons behind the decisions of black box decision systems is nowadays a crucial topic. We address the problem of providing meaningful explanations in the widely applied image classification tasks. In particular, we explore the impact of changing the neighborhood generation function for a local interpretable model-agnostic explanator by proposing four different variants. All the proposed methods are based on a grid-based segmentation of the images, but each of them proposes a different strategy for generating the neighborhood of the image for which an explanation is required. An extensive experimentation shows both the improvements and the weaknesses of each proposed approach. Source: PAKDD, pp. 55–68, Macau, 14-17/04/2019
DOI: 10.1007/978-3-030-16148-4_5
Project(s): Track and Know via OpenAIRE, SoBigData via OpenAIRE


See at: arpi.unipi.it Open Access | Lecture Notes in Computer Science Restricted | link.springer.com Restricted | CNR ExploRA
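The grid-based neighborhood generation can be sketched as follows. This is an illustrative toy, not one of the paper's four variants: it partitions the image into grid cells and generates perturbed copies by blanking a random subset of cells (the variants differ precisely in how this sampling is done):

```python
import random
from itertools import product

def grid_cells(h, w, gh, gw):
    """Partition an h x w image into a gh x gw grid of pixel-coordinate lists."""
    cells = []
    for gi, gj in product(range(gh), range(gw)):
        rows = range(gi * h // gh, (gi + 1) * h // gh)
        cols = range(gj * w // gw, (gj + 1) * w // gw)
        cells.append([(r, c) for r in rows for c in cols])
    return cells

def neighborhood(image, gh=2, gw=2, n=8, seed=0):
    """Generate n perturbed copies of the image by blanking random grid cells;
    each copy is returned with the binary mask that produced it."""
    rng = random.Random(seed)
    h, w = len(image), len(image[0])
    cells = grid_cells(h, w, gh, gw)
    variants = []
    for _ in range(n):
        mask = [rng.random() < 0.5 for _ in cells]   # which cells to keep
        img = [row[:] for row in image]
        for keep, cell in zip(mask, cells):
            if not keep:
                for r, c in cell:
                    img[r][c] = 0                    # blanked-out cell
        variants.append((mask, img))
    return variants
```

The masks then serve as the interpretable binary representation on which a local surrogate model is trained, in the usual model-agnostic fashion.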


2019 Conference article Open Access
Privacy risk for individual basket patterns
Pellungrini R., Monreale A., Guidotti R.
Retail data are of fundamental importance for businesses and enterprises that want to understand the purchasing behaviour of their customers. Such data are also useful to develop analytical services and for marketing purposes, often based on individual purchasing patterns. However, retail data and the extracted models may also provide very sensitive information to possible malicious third parties. Therefore, in this paper we propose a methodology for empirically assessing the privacy risk in releasing individual purchasing data. The experiments on real-world retail data show that, although individual patterns describe a summary of the customer activity, they may be successfully used for customer re-identification. Source: PAP 2018, pp. 141–155, Dublin, Ireland, 10/09/2018 - 14/09/2018
DOI: 10.1007/978-3-030-13463-1_11
Project(s): SoBigData via OpenAIRE


See at: arpi.unipi.it Open Access | Lecture Notes in Computer Science Restricted | link.springer.com Restricted | CNR ExploRA
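A minimal version of this kind of empirical risk assessment (function and data names are hypothetical, and the attack model is deliberately simplified): assume an adversary who knows h items bought by a customer, and take the risk to be the worst-case re-identification probability over all possible background knowledge of size h:

```python
from itertools import combinations

def risk(baskets, customer, h):
    """Worst-case re-identification probability for a customer, assuming an
    adversary who knows h of the items the customer purchased. For each
    possible background knowledge, the adversary's success probability is
    1 / (number of customers whose basket matches it)."""
    items = baskets[customer]
    worst = 0.0
    for known in combinations(sorted(items), h):
        matches = sum(1 for b in baskets.values() if set(known) <= b)
        worst = max(worst, 1.0 / matches)
    return worst
```

Enumerating all size-h subsets is exponential in the basket size, which is one reason the paper resorts to an empirical methodology rather than exact worst-case computation.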


2019 Journal article Open Access
Personalized market basket prediction with temporal annotated recurring sequences
Guidotti R., Rossetti G., Pappalardo L., Giannotti F., Pedreschi D.
Nowadays, a hot challenge for supermarket chains is to offer personalized services to their customers. Market basket prediction, i.e., supplying the customer with a shopping list for the next purchase according to her current needs, is one of these services. Current approaches are not capable of capturing at the same time the different factors influencing the customer's decision process: co-occurrence, sequentiality, periodicity and recurrence of the purchased items. To this aim, we define a pattern, the Temporal Annotated Recurring Sequence (TARS), able to capture simultaneously and adaptively all these factors. We define the method to extract TARS and develop a next-basket predictor named TBP (TARS Based Predictor) that, on top of TARS, is able to understand the level of the customer's stocks and recommend the set of most necessary items. By adopting the TBP, supermarket chains could craft tailored suggestions for each individual customer, which in turn could effectively speed up their shopping sessions. Extensive experiments show that TARS are able to explain the customer purchase behavior, and that TBP outperforms the state-of-the-art competitors. Source: IEEE transactions on knowledge and data engineering (Print) 31 (2019): 2151–2163. doi:10.1109/TKDE.2018.2872587
DOI: 10.1109/tkde.2018.2872587
Project(s): SoBigData via OpenAIRE


See at: Archivio della Ricerca - Università di Pisa Open Access | IEEE Transactions on Knowledge and Data Engineering Open Access | ISTI Repository Open Access | IEEE Transactions on Knowledge and Data Engineering Restricted | ieeexplore.ieee.org Restricted | CNR ExploRA
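A drastically simplified stand-in for the TARS-based predictor, capturing only the periodicity/recurrence factor (TARS additionally model co-occurrence and sequentiality, and the function below is hypothetical): recommend the items whose typical inter-purchase interval has elapsed since the last purchase, i.e., the items the customer's stock is likely to have run out of:

```python
def next_basket(history, today):
    """history: list of (day, set_of_items) pairs sorted by day.
    Estimate each item's typical repurchase interval as the median gap
    between consecutive purchases, and recommend the items that are 'due'."""
    last, gaps = {}, {}
    for day, items in history:
        for it in items:
            if it in last:
                gaps.setdefault(it, []).append(day - last[it])
            last[it] = day
    due = set()
    for it, g in gaps.items():
        median = sorted(g)[len(g) // 2]     # robust typical interval
        if today - last[it] >= median:
            due.add(it)
    return due
```

Even this toy shows the appeal of the approach: the recommendation is self-explanatory ("you usually buy milk every 7 days, and it has been 7 days").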


2019 Journal article Open Access
A survey of methods for explaining black box models
Guidotti R., Monreale A., Ruggieri S., Turini F., Giannotti F., Pedreschi D.
In recent years, many accurate decision support systems have been constructed as black boxes, that is, as systems that hide their internal logic from the user. This lack of explanation constitutes both a practical and an ethical issue. The literature reports many approaches aimed at overcoming this crucial weakness, sometimes at the cost of sacrificing accuracy for interpretability. The applications in which black box decision systems can be used are various, and each approach is typically developed to provide a solution for a specific problem and, as a consequence, explicitly or implicitly delineates its own definition of interpretability and explanation. The aim of this article is to provide a classification of the main problems addressed in the literature with respect to the notion of explanation and the type of black box system. Given a problem definition, a black box type, and a desired explanation, this survey should help researchers find the proposals most useful for their own work. The proposed classification of approaches to open black box models should also be useful for putting the many open research questions in perspective. Source: ACM computing surveys 51 (2019). doi:10.1145/3236009
DOI: 10.1145/3236009
DOI: 10.48550/arxiv.1802.01933
Project(s): SoBigData via OpenAIRE


See at: arXiv.org e-Print Archive Open Access | Archivio istituzionale della Ricerca - Scuola Normale Superiore Open Access | dl.acm.org Open Access | ACM Computing Surveys Open Access | Archivio della Ricerca - Università di Pisa Open Access | ISTI Repository Open Access | ACM Computing Surveys Restricted | doi.org Restricted | CNR ExploRA


2019 Journal article Open Access
The italian music superdiversity. Geography, emotion and language: one resource to find them, one resource to rule them all
Pollacci L., Guidotti R., Rossetti G., Giannotti F., Pedreschi D.
Globalization can lead to a growing standardization of musical contents. Using a cross-service multi-level dataset, we investigate the current Italian music scene. The investigation highlights the Italian musical superdiversity, both by analyzing the geographical and lexical dimensions individually and by combining them. Using different kinds of features over the geographical dimension leads to two similar, comparable and coherent results, confirming the strong and essential correlation between melodies and lyrics. The profiles identified are markedly distinct from one another with respect to sentiment, lexicon, and melodic features. Through a novel application of a sentiment spreading algorithm and songs' melodic features, we are able to highlight discriminant characteristics that cut across the standard regional political boundaries, reconfiguring them following the actual musical communicative practices. Source: Multimedia tools and applications (Dordrecht. Online) 78 (2019): 3297–3319. doi:10.1007/s11042-018-6511-6
DOI: 10.1007/s11042-018-6511-6
Project(s): SoBigData via OpenAIRE


See at: Multimedia Tools and Applications Open Access | Archivio della Ricerca - Università di Pisa Open Access | ISTI Repository Open Access | Multimedia Tools and Applications Restricted | link.springer.com Restricted | CNR ExploRA


2019 Report Open Access
ISTI Young Researcher Award "Matteo Dellepiane" - Edition 2019
Barsocchi P., Candela L., Crivello A., Esuli A., Ferrari A., Girardi M., Guidotti R., Lonetti F., Malomo L., Moroni D., Nardini F. M., Pappalardo L., Rinzivillo S., Rossetti G., Robol L.
The ISTI Young Researcher Award (YRA) selects yearly the best young staff members working at the Institute of Information Science and Technologies (ISTI). The award focuses on the quality and quantity of the scientific production; in particular, it is granted to the best young staff members (less than 35 years old) by assessing their scientific production in the year preceding the award. This report documents the selection procedure and the results of the 2019 YRA edition. From the 2019 edition on, the award is named "Matteo Dellepiane", being dedicated to a bright ISTI researcher who prematurely left us and who contributed a lot to the YRA initiative from its early start. Source: ISTI Technical reports, 2019

See at: ISTI Repository Open Access | CNR ExploRA


2019 Journal article Open Access
Factual and counterfactual explanations for black box decision making
Guidotti R., Monreale A., Giannotti F., Pedreschi D., Ruggieri S., Turini F.
The rise of sophisticated machine learning models has brought accurate but obscure decision systems, which hide their logic, thus undermining transparency, trust, and the adoption of artificial intelligence (AI) in socially sensitive and safety-critical contexts. We introduce a local rule-based explanation method, providing faithful explanations of the decision made by a black box classifier on a specific instance. The proposed method first learns an interpretable, local classifier on a synthetic neighborhood of the instance under investigation, generated by a genetic algorithm. Then, it derives from the interpretable classifier an explanation consisting of a decision rule, explaining the factual reasons of the decision, and a set of counterfactuals, suggesting the changes in the instance features that would lead to a different outcome. Experimental results show that the proposed method outperforms existing approaches in terms of the quality of the explanations and of the accuracy in mimicking the black box. Source: IEEE intelligent systems 34 (2019): 14–22. doi:10.1109/MIS.2019.2957223
DOI: 10.1109/mis.2019.2957223
Project(s): XAI via OpenAIRE, SoBigData via OpenAIRE, Humane AI via OpenAIRE


See at: ISTI Repository Open Access | IEEE Intelligent Systems Restricted | ieeexplore.ieee.org Restricted | CNR ExploRA
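The local explanation pipeline described in the abstract can be sketched in a few lines. The toy below samples the synthetic neighborhood uniformly (the paper uses a genetic algorithm) from hypothetical per-feature ranges, labels it with the black box, fits a one-feature decision stump as the interpretable local classifier, and reads a factual rule and a counterfactual off the chosen split:

```python
import random

def explain(black_box, x, ranges, n=200, seed=0):
    """Minimal local rule-based explanation sketch for instance x:
    neighborhood sampling, black-box labeling, interpretable surrogate,
    then a factual rule and a counterfactual derived from it."""
    rng = random.Random(seed)
    Z = [[rng.uniform(lo, hi) for lo, hi in ranges] for _ in range(n)]
    Y = [black_box(z) for z in Z]
    best = None
    for j in range(len(x)):                      # search the best single split
        for t in sorted({z[j] for z in Z}):
            acc = sum((z[j] <= t) == y for z, y in zip(Z, Y)) / n
            acc = max(acc, 1 - acc)              # either side may predict the class
            if best is None or acc > best[0]:
                best = (acc, j, t)
    _, j, t = best
    side = "<=" if x[j] <= t else ">"
    factual = f"x[{j}] {side} {t:.3f} -> {black_box(x)}"
    counterfactual = list(x)
    counterfactual[j] = t + 1e-6 if x[j] <= t else t   # minimal move across the split
    return {"factual": factual, "counterfactual": counterfactual, "split": (j, t)}
```

The real method learns a full decision tree locally, so its rules and counterfactuals involve several conditions; the stump keeps the sketch short while preserving the factual/counterfactual structure.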


2019 Conference article Open Access
Learning data mining
Guidotti R., Monreale A., Rinzivillo S.
In the last decade, the usage and study of data mining and machine learning algorithms have received increasing attention from several heterogeneous fields of research. Learning how and why a certain algorithm returns a particular result, and understanding the main problems connected to its execution, is a hot topic in the education on data mining methods. In order to support data mining beginners, students, teachers, and researchers, we introduce a novel didactic environment. The Didactic Data Mining Environment (DDME) allows executing a data mining algorithm on a dataset and observing the algorithm's behavior step by step, to learn how and why a certain result is returned. DDME can be practically exploited by teachers and students for a more interactive learning of data mining. Indeed, on top of the core didactic library, we designed a visual platform that allows the online execution of experiments and the visualization of the algorithm steps. The visual platform abstracts the coding activity and makes the execution of algorithms available to non-technicians. Source: DSAA, pp. 361–370, Turin, Italy, 1-4/10/2018
DOI: 10.1109/dsaa.2018.00047
Project(s): SoBigData via OpenAIRE


See at: arpi.unipi.it Open Access | doi.org Restricted | ieeexplore.ieee.org Restricted | CNR ExploRA
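The step-by-step inspection that DDME offers can be mimicked with a Python generator that yields the intermediate state after each algorithm pass; here for Apriori-style frequent-itemset mining (an illustrative sketch, not the actual DDME library API):

```python
def apriori_steps(baskets, min_support):
    """Yield (level, candidate counts, surviving frequent itemsets) after each
    Apriori pass, so a learner can inspect how candidates are generated and
    pruned instead of only seeing the final result."""
    items = sorted({i for b in baskets for i in b})
    level, k = [frozenset([i]) for i in items], 1
    while level:
        counts = {c: sum(1 for b in baskets if c <= b) for c in level}
        frequent = {c for c, n in counts.items() if n >= min_support}
        yield k, counts, frequent
        k += 1
        # next level: unions of surviving itemsets one element larger
        level = list({a | b for a in frequent for b in frequent if len(a | b) == k})
```

Iterating the generator one step at a time is exactly the interaction pattern a didactic front end can expose to students.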


2019 Conference article Closed Access
Exploring students eating habits through individual profiling and clustering analysis
Natilli M., Monreale A., Guidotti R., Pappalardo L.
Individual well-being strongly depends on food habits; therefore, it is important to educate the general population, and especially young people, about the importance of a healthy and balanced diet. To this end, understanding the real eating habits of people becomes fundamental for a better and more effective intervention to improve the students' diet. In this paper we present two exploratory analyses based on centroid-based clustering that have the goal of understanding the food habits of university students. The first clustering analysis simply exploits the information about the students' consumption of specific food categories, while the second includes the temporal dimension, in order to capture when the students consume specific foods. The second approach enables the study of the impact of the time of consumption on food choices. Source: PAP 2018 - The 2nd International Workshop on Personal Analytics and Privacy, pp. 156–171, Dublin, Ireland, 10-14 September 2018
DOI: 10.1007/978-3-030-13463-1_12
Project(s): SoBigData via OpenAIRE


See at: doi.org Restricted | link.springer.com Restricted | CNR ExploRA
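The centroid-based clustering underlying both analyses is plain k-means; the stdlib-only sketch below (Lloyd's algorithm on toy consumption vectors, with a deterministic first-k initialization for reproducibility) is an illustrative stand-in, not the paper's actual pipeline:

```python
def kmeans(points, k, iters=20):
    """Centroid-based clustering: assign each vector to its nearest centroid,
    then move each centroid to the mean of its group, and repeat."""
    centroids = [list(p) for p in points[:k]]   # deterministic initialization
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            groups[i].append(p)
        for i, g in enumerate(groups):
            if g:                               # keep empty clusters where they are
                centroids[i] = [sum(col) / len(g) for col in zip(*g)]
    return centroids, groups
```

For the temporal variant, the same algorithm runs on vectors extended with time-of-consumption features, which is what lets the clusters separate by *when* food is eaten.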


2019 Conference article Restricted
Helping your docker images to spread based on explainable models
Guidotti R., Soldani J., Neri D., Brogi A., Pedreschi D.
Docker is on the rise in today's enterprise IT. It permits shipping applications inside portable containers, which run from so-called Docker images. Docker images are distributed through public registries, which also monitor their popularity. The popularity of an image affects its actual usage, and hence the potential revenues for its developers. In this paper, we present a solution based on interpretable decision trees and regression trees for estimating the popularity of a given Docker image, and for understanding how to improve an image to increase its popularity. The results presented in this work can provide valuable insights to Docker developers, helping them in spreading their images. Code related to this paper is available at: https://github.com/di-unipi-socc/DockerImageMiner. Source: ECML-PKDD 2018, pp. 205–221, Dublin, Ireland, 10/09/2018 - 14/09/2018
DOI: 10.1007/978-3-030-10997-4_13
Project(s): SoBigData via OpenAIRE


See at: doi.org Restricted | link.springer.com Restricted | CNR ExploRA
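The interpretability argument can be made concrete with a depth-1 regression tree (a toy with one hypothetical feature, not the paper's model or features): the single split both predicts popularity and directly suggests an improvement, namely crossing the threshold into the higher-popularity branch:

```python
def regression_stump(X, y):
    """Depth-1 regression tree: pick the single (feature, threshold) split that
    minimizes the summed squared error of the two leaf means."""
    def sse(vals):
        if not vals:
            return 0.0
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals)

    def mean(vals):
        return sum(vals) / len(vals) if vals else 0.0

    best = None
    for j in range(len(X[0])):
        for t in sorted({x[j] for x in X}):
            left = [yi for x, yi in zip(X, y) if x[j] <= t]
            right = [yi for x, yi in zip(X, y) if x[j] > t]
            err = sse(left) + sse(right)
            if best is None or err < best[0]:
                best = (err, j, t, mean(left), mean(right))
    _, j, t, lmean, rmean = best
    return {"feature": j, "threshold": t, "left": lmean, "right": rmean}
```

Reading the returned split as "images with feature j above t average rmean pulls" is the kind of actionable insight an opaque regressor cannot give.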


2019 Conference article Open Access
Meaningful explanations of black box AI decision systems
Pedreschi D., Giannotti F., Guidotti R., Monreale A., Ruggieri S., Turini F.
Black box AI systems for automated decision making, often based on machine learning over (big) data, map a user's features into a class or a score without exposing the reasons why. This is problematic not only for the lack of transparency, but also for possible biases inherited by the algorithms from human prejudices and collection artifacts hidden in the training data, which may lead to unfair or wrong decisions. We focus on the urgent open challenge of how to construct meaningful explanations of opaque AI/ML systems, introducing the local-to-global framework for black box explanation, articulated along three lines: (i) the language for expressing explanations in terms of logic rules, with statistical and causal interpretation; (ii) the inference of local explanations for revealing the decision rationale for a specific case, by auditing the black box in the vicinity of the target instance; (iii) the bottom-up generalization of many local explanations into simple global ones, with algorithms that optimize for quality and comprehensibility. We argue that the local-first approach opens the door to a wide variety of alternative solutions along different dimensions: a variety of data sources (relational, text, images, etc.), a variety of learning problems (multi-label classification, regression, scoring, ranking), a variety of languages for expressing meaningful explanations, and a variety of means to audit a black box. Source: AAAI, pp. 9780–9784, Honolulu, 27/01/2019 - 01/02/2019
Project(s): SoBigData via OpenAIRE

See at: ISTI Repository Open Access | www.aaai.org Open Access | CNR ExploRA
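Line (iii), the bottom-up generalization, can be caricatured in a few lines: collect the local rules inferred for many audited instances and keep those that recur often enough to serve as a global summary. The rule strings and threshold below are hypothetical, and the paper's algorithms optimize quality and comprehensibility rather than raw frequency:

```python
from collections import Counter

def local_to_global(local_rules, min_coverage):
    """Keep the local rules (premise -> class strings) that recur across at
    least min_coverage audited instances, most frequent first."""
    counts = Counter(local_rules)
    return [rule for rule, n in counts.most_common() if n >= min_coverage]
```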