52 result(s)
2020 Part of book or chapter of book Open Access OPEN

Explaining multi-label black-box classifiers for health applications
Panigutti C., Guidotti R., Monreale A., Pedreschi D.
Today the state-of-the-art performance in classification is achieved by the so-called "black boxes", i.e. decision-making systems whose internal logic is obscure. Such models could revolutionize the health-care system; however, their deployment in real-world diagnosis decision support systems is subject to several risks and limitations due to the lack of transparency. The typical classification problem in health-care requires a multi-label approach since the possible labels are not mutually exclusive, e.g. diagnoses. We propose MARLENA, a model-agnostic method which explains multi-label black box decisions. MARLENA explains an individual decision in three steps. First, it generates a synthetic neighborhood around the instance to be explained using a strategy suitable for multi-label decisions. It then learns a decision tree on such neighborhood and finally derives from it a decision rule that explains the black box decision. Our experiments show that MARLENA performs well in terms of mimicking the black box behavior while at the same time gaining a notable amount of interpretability through compact decision rules, i.e. rules with limited length.
Source: Precision Health and Medicine. A Digital Revolution in Healthcare, edited by Arash Shaban-Nejad, Martin Michalowski, pp. 97–110, 2020
DOI: 10.1007/978-3-030-24409-5_9

See at: Unknown Repository Open Access | Unknown Repository Restricted | Unknown Repository Restricted | link.springer.com Restricted | Unknown Repository Restricted | Unknown Repository Restricted | CNR ExploRA Restricted
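The three-step procedure summarized in the abstract (synthetic neighborhood, surrogate decision tree, decision rule) can be sketched as follows. This is only an illustrative approximation, not the MARLENA code: the black box, the Gaussian neighborhood sampling, and all parameters are hypothetical stand-ins.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Hypothetical multi-label black box: two binary labels from two features.
def black_box(X):
    return np.column_stack([(X[:, 0] > 0).astype(int),
                            (X[:, 1] > 0).astype(int)])

x = np.array([0.5, -0.3])            # instance to be explained

# Step 1: synthetic neighborhood around x (plain Gaussian perturbation here).
Z = x + rng.normal(scale=0.5, size=(500, 2))
Y = black_box(Z)

# Step 2: interpretable multi-label surrogate (sklearn trees accept 2D targets).
tree = DecisionTreeClassifier(max_depth=3).fit(Z, Y)

# Step 3: the root-to-leaf path followed by x yields the explaining rule.
node, rule = 0, []
while tree.tree_.children_left[node] != -1:        # -1 marks a leaf
    f, t = tree.tree_.feature[node], tree.tree_.threshold[node]
    if x[f] <= t:
        rule.append(f"x[{f}] <= {t:.2f}")
        node = tree.tree_.children_left[node]
    else:
        rule.append(f"x[{f}] > {t:.2f}")
        node = tree.tree_.children_right[node]
print(" AND ".join(rule))
```

The rule is compact by construction: its length is bounded by the surrogate's depth, which matches the abstract's emphasis on rules with limited length.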


2020 Conference object Open Access OPEN

Black box explanation by learning image exemplars in the latent feature space
Guidotti R., Monreale A., Matwin S., Pedreschi D.
We present an approach to explain the decisions of black box models for image classification. While using the black box to label images, our explanation method exploits the latent feature space learned through an adversarial autoencoder. The proposed method first generates exemplar images in the latent feature space and learns a decision tree classifier. Then, it selects and decodes exemplars respecting local decision rules. Finally, it visualizes them in a manner that shows the user how the exemplars can be modified to either stay within their class, or to become counter-factuals by "morphing" into another class. Since we focus on black box decision systems for image classification, the explanation obtained from the exemplars also provides a saliency map highlighting the areas of the image that contribute to its classification, and areas of the image that push it into another class. We present the results of an experimental evaluation on three datasets and two black box models. Besides providing the most useful and interpretable explanations, we show that the proposed method outperforms existing explainers in terms of fidelity, relevance, coherence, and stability.
Source: European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, ECML PKDD 2019, pp. 189–205, Würzburg, Germany, 16-20 September, 2019
DOI: 10.1007/978-3-030-46150-8_12
Project(s): AI4EU via OpenAIRE, Track and Know via OpenAIRE, PRO-RES via OpenAIRE, SoBigData via OpenAIRE

See at: arXiv.org e-Print Archive Open Access | Unknown Repository Open Access | ISTI Repository Open Access | Unknown Repository Restricted | Unknown Repository Restricted | Unknown Repository Restricted | Unknown Repository Restricted | Unknown Repository Restricted | CNR ExploRA Restricted | www.springerprofessional.de Restricted


2020 Conference object Restricted

Global explanations with local scoring
Setzu M., Guidotti R., Monreale A., Turini F.
Artificial Intelligence systems often adopt machine learning models encoding complex algorithms with potentially unknown behavior. As the application of these "black box" models grows, it is our responsibility to understand their inner workings and formulate them in human-understandable explanations. To this end, we propose a rule-based model-agnostic explanation method that follows a local-to-global schema: it generalizes a global explanation summarizing the decision logic of a black box starting from the local explanations of single predicted instances. We define a scoring system based on a rule relevance score to extract global explanations from a set of local explanations in the form of decision rules. Experiments on several datasets and black boxes show the stability and low complexity of the global explanations provided by the proposed solution in comparison with baselines and state-of-the-art global explainers.
Source: Joint European Conference on Machine Learning and Knowledge Discovery in Databases - ECML PKDD 2019, pp. 159–171, Würzburg, Germany, 16-20 September, 2019
DOI: 10.1007/978-3-030-43823-4_14
Project(s): AI4EU via OpenAIRE, Track and Know via OpenAIRE, PRO-RES via OpenAIRE, SoBigData via OpenAIRE

See at: Unknown Repository Restricted | Unknown Repository Restricted | Unknown Repository Restricted | Unknown Repository Restricted | CNR ExploRA Restricted
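The local-to-global idea of scoring local rules by how well they hold on the whole dataset can be illustrated in miniature. Here coverage × accuracy is a simple stand-in for the paper's rule relevance score, and the data, rules, and 0.2 threshold are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Hypothetical local rules as (predicate, predicted label) pairs.
rules = [
    (lambda r: r[0] > 0, 1),
    (lambda r: r[1] > 0, 1),
    (lambda r: r[0] <= 0, 0),
    (lambda r: r[0] > 2, 1),     # overly specific: covers almost nothing
]

def score(rule, label):
    hit = np.array([rule(r) for r in X])
    coverage = hit.mean()                                # fraction covered
    accuracy = (y[hit] == label).mean() if hit.any() else 0.0
    return coverage * accuracy                           # toy relevance score

# Keep only rules that generalize well: these form the global explanation.
scored = sorted(rules, key=lambda rl: score(*rl), reverse=True)
global_rules = [rl for rl in scored if score(*rl) > 0.2]
```

The broad rules survive the filter while the overly specific one is dropped, which is the essence of generalizing from many local explanations to a few global ones.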


2020 Conference object Open Access OPEN

Self-Adapting Trajectory Segmentation
Bonavita A., Guidotti R., Nanni M.
Identifying the portions of trajectory data where movement ends and a significant stop starts is a basic, yet fundamental task that can affect the quality of any mobility analytics process. Most of the many existing solutions adopted by researchers and practitioners are simply based on fixed spatial and temporal thresholds stating when the moving object remained still for a significant amount of time, yet such thresholds remain static parameters for the user to guess. In this work we study trajectory segmentation from a multi-granularity perspective, looking for a better understanding of the problem and for an automatic, parameter-free and user-adaptive solution that flexibly adjusts the segmentation criteria to the specific user under study. Experiments over real data and comparisons against simple competitors show that the flexibility of the proposed method has a positive impact on results.
Source: International Workshop in Big Mobility Data Analytics - EDBT/ICDT Workshops, 30/03/2020
Project(s): Track and Know via OpenAIRE

See at: ceur-ws.org Open Access | ISTI Repository Open Access | CNR ExploRA Open Access
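As a point of reference for what the abstract calls fixed-threshold segmentation, a minimal stop detector might look like the following. The thresholds, the point format, and the use of planar coordinates in metres are all assumptions for illustration; the paper's contribution is precisely to remove such static parameters.

```python
def detect_stops(points, space_th=50.0, time_th=300.0):
    """Threshold-based stop detection.

    points: list of (t_seconds, x_m, y_m), time-ordered.
    A stop is a maximal run of points staying within space_th metres of the
    run's first point for at least time_th seconds.
    Returns a list of (start_index, end_index) pairs.
    """
    stops, i = [], 0
    while i < len(points):
        j = i
        while (j + 1 < len(points)
               and ((points[j + 1][1] - points[i][1]) ** 2
                    + (points[j + 1][2] - points[i][2]) ** 2) ** 0.5 <= space_th):
            j += 1
        if points[j][0] - points[i][0] >= time_th:
            stops.append((i, j))
            i = j + 1
        else:
            i += 1
    return stops

# Toy trajectory: three clustered points (a stop), then fast movement.
traj = [(0, 0.0, 0.0), (100, 5.0, 5.0), (400, 10.0, 0.0),
        (500, 1000.0, 0.0), (600, 2000.0, 0.0)]
# detect_stops(traj) → [(0, 2)]
```

Note how the result depends entirely on guessing `space_th` and `time_th` well, which is the weakness the self-adapting method addresses.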


2020 Conference object Open Access OPEN

Data-Driven Location Annotation for Fleet Mobility Modeling
Guidotti R., Nanni M., Sbolgi F.
The large availability of mobility data allows studying human behavior and human activities. However, this massive and raw amount of data generally lacks any detailed semantics or useful categorization. Annotations of the locations where users stop may be helpful in a number of contexts, including user modeling and profiling, urban planning, activity recommendations, and can even lead to a deeper understanding of the mobility evolution of an urban area. In this paper, we foster the expressive power of individual mobility networks, a data model describing users' behavior, by defining a data-driven procedure for location annotation. The procedure considers individual, collective, and contextual features for turning locations into annotated ones. The annotated locations have a high expressiveness that allows generalizing individual mobility networks, and that makes them comparable across different users. The results of our study on a dataset of trucks moving in Greece show that the annotated individual mobility networks can enable detailed analysis of urban areas and the planning of advanced mobility applications.
Source: International Workshop in Big Mobility Data Analytics - EDBT/ICDT Workshops, 30/03/2020
Project(s): Track and Know via OpenAIRE

See at: ceur-ws.org Open Access | ISTI Repository Open Access | CNR ExploRA Open Access


2020 Article Open Access OPEN

(So) Big Data and the transformation of the city
Andrienko G., Andrienko N., Boldrini C., Caldarelli G., Cintia P., Cresci S., Facchini A., Giannotti F., Gionis A., Guidotti R., Mathioudakis M., Muntean C. I., Pappalardo L., Pedreschi D., Pournaras E., Pratesi F., Tesconi M., Trasarti R.
The exponential increase in the availability of large-scale mobility data has fueled the vision of smart cities that will transform our lives. The truth is that we have just scratched the surface of the research challenges that should be tackled in order to make this vision a reality. Consequently, there is an increasing interest among different research communities (ranging from civil engineering to computer science) and industrial stakeholders in building knowledge discovery pipelines over such data sources. At the same time, this widespread data availability also raises privacy issues that must be considered by both industrial and academic stakeholders. In this paper, we provide a wide perspective on the role that big data have in reshaping cities. The paper covers the main aspects of urban data analytics, focusing on privacy issues, algorithms, applications and services, and georeferenced data from social media. In discussing these aspects, we leverage, as concrete examples and case studies of urban data science tools, the results obtained in the "City of Citizens" thematic area of the Horizon 2020 SoBigData initiative, which includes a virtual research environment with mobility datasets and urban analytics methods developed by several institutions around Europe. We conclude the paper outlining the main research challenges that urban data science has yet to address in order to help make the smart city vision a reality.
Source: International Journal of Data Science and Analytics (Print) 1 (2020). doi:10.1007/s41060-020-00207-3
DOI: 10.1007/s41060-020-00207-3
Project(s): SoBigData via OpenAIRE, SoBigData-PlusPlus via OpenAIRE

See at: Archivio istituzionale della ricerca - Università degli Studi di Venezia Ca' Foscari Open Access | link.springer.com Open Access | International Journal of Data Science and Analytics Open Access | City Research Online Open Access | ISTI Repository Open Access | Fraunhofer-ePrints Open Access | CNR ExploRA Open Access | International Journal of Data Science and Analytics Restricted | Archivio della Ricerca - Università di Pisa Restricted | International Journal of Data Science and Analytics Restricted | International Journal of Data Science and Analytics Restricted | International Journal of Data Science and Analytics Restricted | International Journal of Data Science and Analytics Restricted | International Journal of Data Science and Analytics Restricted


2020 Article Open Access OPEN

Human migration: the big data perspective
Sîrbu A., Andrienko G., Andrienko N., Boldrini C., Conti M., Giannotti F., Guidotti R., Bertoli S., Kim J., Muntean C. I., Pappalardo L., Passarella A., Pedreschi D., Pollacci L., Pratesi F., Sharma R.
How can big data help to understand the migration phenomenon? In this paper, we try to answer this question through an analysis of various phases of migration, comparing traditional and novel data sources and models at each phase. We concentrate on three phases of migration, at each phase describing the state of the art and recent developments and ideas. The first phase includes the journey, and we study migration flows and stocks, providing examples where big data can have an impact. The second phase discusses the stay, i.e. migrant integration in the destination country. We explore various data sets and models that can be used to quantify and understand migrant integration, with the final aim of providing the basis for the construction of a novel multi-level integration index. The last phase is related to the effects of migration on the source countries and the return of migrants.
Source: International Journal of Data Science and Analytics (Online) (2020). doi:10.1007/s41060-020-00213-5
DOI: 10.1007/s41060-020-00213-5
Project(s): SoBigData via OpenAIRE

See at: City Research Online Open Access | Fraunhofer-ePrints Open Access | International Journal of Data Science and Analytics Restricted | International Journal of Data Science and Analytics Restricted | International Journal of Data Science and Analytics Restricted | International Journal of Data Science and Analytics Restricted | International Journal of Data Science and Analytics Restricted | International Journal of Data Science and Analytics Restricted | link.springer.com | CNR ExploRA


2020 Part of book or chapter of book Open Access OPEN

"Know thyself": how personal music tastes shape the Last.fm online social network
Guidotti R., Rossetti G.
As Nietzsche once wrote, "Without music, life would be a mistake" (Twilight of the Idols, 1889). The music we listen to reflects our personality and our way of approaching life. In order to foster self-awareness, we devised a Personal Listening Data Model that allows for capturing individual music preferences and patterns of music consumption. We applied our model to 30k users of Last.fm for whom we collected both friendship ties and multiple listening records. Starting from such rich data we performed an analysis whose final aim was twofold: (i) capture, and characterize, the individual dimension of music consumption in order to identify clusters of like-minded Last.fm users; (ii) analyze if, and how, such clusters relate to the social structure expressed by the users in the service. Do there exist individuals having similar Personal Listening Data Models? If so, are they directly connected in the social graph or do they belong to the same community?
Source: Formal Methods. FM 2019 International Workshops Porto, Portugal, October 7-11, 2019, Revised Selected Papers, Part I, edited by Sekerinski E. et al., pp. 146–161, 2020
DOI: 10.1007/978-3-030-54994-7_11
Project(s): Track and Know via OpenAIRE, SoBigData via OpenAIRE

See at: ISTI Repository Open Access | Unknown Repository Restricted | Unknown Repository Restricted | Unknown Repository Restricted | Unknown Repository Restricted | link.springer.com Restricted | Unknown Repository Restricted | CNR ExploRA Restricted


2019 Article Open Access OPEN

A survey of methods for explaining black box models
Guidotti R., Monreale A., Ruggieri S., Turini F., Giannotti F., Pedreschi D.
In recent years, many accurate decision support systems have been constructed as black boxes, that is, as systems that hide their internal logic from the user. This lack of explanation constitutes both a practical and an ethical issue. The literature reports many approaches aimed at overcoming this crucial weakness, sometimes at the cost of sacrificing accuracy for interpretability. The applications in which black box decision systems can be used are various, and each approach is typically developed to provide a solution for a specific problem and, as a consequence, explicitly or implicitly delineates its own definition of interpretability and explanation. The aim of this article is to provide a classification of the main problems addressed in the literature with respect to the notion of explanation and the type of black box system. Given a problem definition, a black box type, and a desired explanation, this survey should help researchers find the proposals most useful for their own work. The proposed classification of approaches to opening black box models should also be useful for putting the many open research questions in perspective.
Source: ACM computing surveys 51 (2019). doi:10.1145/3236009
DOI: 10.1145/3236009
Project(s): SoBigData via OpenAIRE

See at: arXiv.org e-Print Archive Open Access | dl.acm.org Open Access | ACM Computing Surveys Open Access | Archivio della Ricerca - Università di Pisa Open Access | ISTI Repository Open Access | CNR ExploRA Open Access | ACM Computing Surveys Restricted | ACM Computing Surveys Restricted | ACM Computing Surveys Restricted | ACM Computing Surveys Restricted | ACM Computing Surveys Restricted | ACM Computing Surveys Restricted | ACM Computing Surveys Restricted


2019 Article Open Access OPEN

The Italian music superdiversity. Geography, emotion and language: one resource to find them, one resource to rule them all
Pollacci L., Guidotti R., Rossetti G., Giannotti F., Pedreschi D.
Globalization can lead to a growing standardization of musical contents. Using a cross-service multi-level dataset, we investigate the current Italian music scene. The investigation highlights the Italian musical superdiversity, both by analyzing the geographical and lexical dimensions individually and by combining them. Using different kinds of features over the geographical dimension leads to two similar, comparable and coherent results, confirming the strong and essential correlation between melodies and lyrics. The profiles identified are markedly distinct from one another with respect to sentiment, lexicon, and melodic features. Through a novel application of a sentiment spreading algorithm and songs' melodic features, we are able to highlight discriminant characteristics that violate the standard regional political boundaries, reconfiguring them following the actual musical communicative practices.
Source: Multimedia tools and applications (Dordrecht. Online) 78 (2019): 3297–3319. doi:10.1007/s11042-018-6511-6
DOI: 10.1007/s11042-018-6511-6
Project(s): SoBigData via OpenAIRE

See at: Archivio della Ricerca - Università di Pisa Open Access | ISTI Repository Open Access | Multimedia Tools and Applications Restricted | Multimedia Tools and Applications Restricted | Multimedia Tools and Applications Restricted | Multimedia Tools and Applications Restricted | link.springer.com Restricted | Multimedia Tools and Applications Restricted | Multimedia Tools and Applications Restricted | CNR ExploRA Restricted


2019 Report Open Access OPEN

ISTI Young Researcher Award "Matteo Dellepiane" - Edition 2019
Barsocchi P., Candela L., Crivello A., Esuli A., Ferrari A., Girardi M., Guidotti R., Lonetti F., Malomo L., Moroni D., Nardini F. M., Pappalardo L., Rinzivillo S., Rossetti G., Robol L.
The ISTI Young Researcher Award (YRA) annually selects the best young staff members working at the Institute of Information Science and Technologies (ISTI). This award focuses on the quality and quantity of scientific production. In particular, the award is granted to the best young staff members (less than 35 years old) by assessing their scientific production in the year preceding the award. This report documents the selection procedure and the results of the 2019 YRA edition. From the 2019 edition onwards, the award is named after "Matteo Dellepiane", being dedicated to a bright ISTI researcher who prematurely left us and who contributed greatly to the YRA initiative from its early start.
Source: ISTI Technical reports, 2019

See at: ISTI Repository Open Access | CNR ExploRA Open Access


2019 Article Open Access OPEN

Factual and counterfactual explanations for black box decision making
Guidotti R., Monreale A., Giannotti F., Pedreschi D., Ruggieri S., Turini F.
The rise of sophisticated machine learning models has brought accurate but obscure decision systems, which hide their logic, thus undermining transparency, trust, and the adoption of artificial intelligence (AI) in socially sensitive and safety-critical contexts. We introduce a local rule-based explanation method, providing faithful explanations of the decision made by a black box classifier on a specific instance. The proposed method first learns an interpretable, local classifier on a synthetic neighborhood of the instance under investigation, generated by a genetic algorithm. Then, it derives from the interpretable classifier an explanation consisting of a decision rule, explaining the factual reasons of the decision, and a set of counterfactuals, suggesting the changes in the instance features that would lead to a different outcome. Experimental results show that the proposed method outperforms existing approaches in terms of the quality of the explanations and of the accuracy in mimicking the black box.
Source: IEEE intelligent systems 34 (2019): 14–22. doi:10.1109/MIS.2019.2957223
DOI: 10.1109/MIS.2019.2957223
Project(s): SoBigData via OpenAIRE, Humane AI via OpenAIRE

See at: IEEE Intelligent Systems Open Access | Archivio della Ricerca - Università di Pisa Open Access | ISTI Repository Open Access | IEEE Intelligent Systems Restricted | IEEE Intelligent Systems Restricted | IEEE Intelligent Systems Restricted | IEEE Intelligent Systems Restricted | IEEE Intelligent Systems Restricted | CNR ExploRA Restricted | IEEE Intelligent Systems Restricted
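The factual-plus-counterfactual scheme described in the abstract can be sketched as follows. For brevity this sketch replaces the paper's genetic neighborhood generation with plain Gaussian sampling, and the black box and all parameters are hypothetical.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)

# Hypothetical black box classifier to be explained.
def black_box(X):
    return (X[:, 0] + 2 * X[:, 1] > 1).astype(int)

x = np.array([0.2, 0.1])             # instance under investigation

# Local interpretable surrogate on a synthetic neighborhood of x
# (Gaussian sampling here; the paper uses a genetic algorithm).
Z = x + rng.normal(scale=1.0, size=(1000, 2))
tree = DecisionTreeClassifier(max_depth=3).fit(Z, black_box(Z))

# Factual part: the surrogate's prediction for x (its decision path is the rule).
y_x = tree.predict(x.reshape(1, -1))[0]

# Counterfactual part: nearest neighborhood point labeled differently,
# giving the feature changes that would flip the outcome.
other = Z[tree.predict(Z) != y_x]
cf = other[np.argmin(np.linalg.norm(other - x, axis=1))]
delta = cf - x                       # suggested changes to the features
```

`delta` plays the role of the counterfactual suggestion: the smallest observed perturbation of `x` that crosses the surrogate's decision boundary.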


2019 Conference object Open Access OPEN

Meaningful explanations of black box AI decision systems
Pedreschi D., Giannotti F., Guidotti R., Monreale A., Ruggieri S., Turini F.
Black box AI systems for automated decision making, often based on machine learning over (big) data, map a user's features into a class or a score without exposing the reasons why. This is problematic not only for lack of transparency, but also for possible biases inherited by the algorithms from human prejudices and collection artifacts hidden in the training data, which may lead to unfair or wrong decisions. We focus on the urgent open challenge of how to construct meaningful explanations of opaque AI/ML systems, introducing the local-to-global framework for black box explanation, articulated along three lines: (i) the language for expressing explanations in terms of logic rules, with statistical and causal interpretation; (ii) the inference of local explanations for revealing the decision rationale for a specific case, by auditing the black box in the vicinity of the target instance; (iii) the bottom-up generalization of many local explanations into simple global ones, with algorithms that optimize for quality and comprehensibility. We argue that the local-first approach opens the door to a wide variety of alternative solutions along different dimensions: a variety of data sources (relational, text, images, etc.), a variety of learning problems (multi-label classification, regression, scoring, ranking), a variety of languages for expressing meaningful explanations, and a variety of means to audit a black box.
Source: AAAI, pp. 9780–9784, Honolulu, 27/01/2019 - 01/02/2019
Project(s): SoBigData via OpenAIRE

See at: ISTI Repository Open Access | CNR ExploRA Open Access | www.aaai.org Open Access


2019 Article Open Access OPEN

The AI black box explanation problem
Guidotti R., Monreale A., Pedreschi D.
Explainable AI is an essential component of a "Human AI", i.e., an AI that expands human experience instead of replacing it. It will be impossible to gain people's trust in AI tools that make crucial decisions in an opaque way without explaining the rationale followed, especially in areas where we do not want to completely delegate decisions to machines.
Source: ERCIM news (2019): 12–13.
Project(s): SoBigData via OpenAIRE

See at: ercim-news.ercim.eu Open Access | CNR ExploRA Open Access


2019 Conference object Open Access OPEN

On the stability of interpretable models
Guidotti R., Ruggieri S.
Interpretable classification models are built with the purpose of providing a comprehensible description of the decision logic to an external oversight agent. When considered in isolation, a decision tree, a set of classification rules, or a linear model are widely recognized as human-interpretable. However, such models are generated as part of a larger analytical process. Bias in data collection and preparation, or in model construction, may severely affect the accountability of the design process. We conduct an experimental study of the stability of interpretable models with respect to feature selection, instance selection, and model selection. Our conclusions should raise awareness and attention of the scientific community on the need for a stability impact assessment of interpretable models.
Source: IJCNN 2019 - International Joint Conference on Neural Networks (IJCNN), Budapest, Hungary, 14-19 July, 2019
DOI: 10.1109/IJCNN.2019.8852158
Project(s): SoBigData via OpenAIRE

See at: arXiv.org e-Print Archive Open Access | Unknown Repository Open Access | ISTI Repository Open Access | Unknown Repository Restricted | Unknown Repository Restricted | Unknown Repository Restricted | ieeexplore.ieee.org Restricted | CNR ExploRA Restricted | Unknown Repository Restricted | Unknown Repository Restricted | Unknown Repository Restricted


2019 Conference object Restricted

Investigating neighborhood generation methods for explanations of obscure image classifiers
Guidotti R., Monreale A., Cariaggi L.
Given the wide use of machine learning approaches based on opaque prediction models, understanding the reasons behind the decisions of black box decision systems is nowadays a crucial topic. We address the problem of providing meaningful explanations in the widely-applied task of image classification. In particular, we explore the impact of changing the neighborhood generation function for a local interpretable model-agnostic explainer by proposing four different variants. All the proposed methods are based on a grid-based segmentation of the images, but each of them proposes a different strategy for generating the neighborhood of the image for which an explanation is required. Extensive experimentation shows both the strengths and weaknesses of each proposed approach.
Source: PAKDD, pp. 55–68, Macau, 14-17/04/2019
DOI: 10.1007/978-3-030-16148-4_5
Project(s): Track and Know via OpenAIRE, SoBigData via OpenAIRE

See at: Unknown Repository Restricted | Unknown Repository Restricted | Archivio della Ricerca - Università di Pisa Restricted | Unknown Repository Restricted | Unknown Repository Restricted | CNR ExploRA Restricted | Unknown Repository Restricted


2019 Conference object Restricted

Learning data mining
Guidotti R., Monreale A., Rinzivillo S.
In the last decade, the usage and study of data mining and machine learning algorithms have received increasing attention from several heterogeneous fields of research. Learning how and why a certain algorithm returns a particular result, and understanding the main problems connected to its execution, is a hot topic in the teaching of data mining methods. In order to support data mining beginners, students, teachers, and researchers, we introduce a novel didactic environment. The Didactic Data Mining Environment (DDME) allows executing a data mining algorithm on a dataset and observing the algorithm's behavior step by step to learn how and why a certain result is returned. DDME can be practically exploited by teachers and students for a more interactive learning of data mining. Indeed, on top of the core didactic library, we designed a visual platform that allows online execution of experiments and the visualization of the algorithm steps. The visual platform abstracts the coding activity and makes the execution of algorithms available to non-technicians.
Source: DSAA, pp. 361–370, Turin, Italy, 1-4/10/2018
DOI: 10.1109/DSAA.2018.00047
Project(s): SoBigData via OpenAIRE

See at: Unknown Repository Restricted | Unknown Repository Restricted | Unknown Repository Restricted | ieeexplore.ieee.org Restricted | CNR ExploRA Restricted | Unknown Repository Restricted


2019 Conference object Restricted

Privacy risk for individual basket patterns
Pellungrini R., Monreale A., Guidotti R.
Retail data are of fundamental importance for businesses and enterprises that want to understand the purchasing behaviour of their customers. Such data are also useful for developing analytical services and for marketing purposes, often based on individual purchasing patterns. However, retail data and extracted models may also provide very sensitive information to possible malicious third parties. Therefore, in this paper we propose a methodology for empirically assessing the privacy risk in releasing individual purchasing data. The experiments on real-world retail data show that although individual patterns describe a summary of the customer activity, they may be successfully used for customer re-identification.
Source: PAP 2018, pp. 141–155, Dublin, Ireland, 10/09/2018 - 14/09/2018
DOI: 10.1007/978-3-030-13463-1_11
Project(s): SoBigData via OpenAIRE

See at: Unknown Repository Restricted | Unknown Repository Restricted | Unknown Repository Restricted | Unknown Repository Restricted | CNR ExploRA Restricted
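The kind of empirical risk assessment described here can be illustrated in miniature: take each customer's released pattern and measure how many patterns single out exactly one customer. The data and the uniqueness-based risk measure are illustrative assumptions, not the paper's attack simulation.

```python
from collections import Counter

# Hypothetical released patterns: each customer's set of purchased categories.
baskets = {
    "u1": frozenset({"milk", "bread"}),
    "u2": frozenset({"milk", "bread"}),
    "u3": frozenset({"beer", "diapers"}),
    "u4": frozenset({"wine"}),
}

# A customer is at risk if their pattern is unique in the release:
# an adversary knowing that pattern re-identifies them with certainty.
counts = Counter(baskets.values())
risk = sum(1 for p in baskets.values() if counts[p] == 1) / len(baskets)
# u3 and u4 have unique patterns -> risk = 0.5
```

Even this toy release shows the abstract's point: patterns that look like harmless summaries can still act as quasi-identifiers.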


2019 Conference object Restricted

Exploring students eating habits through individual profiling and clustering analysis
Natilli M., Monreale A., Guidotti R., Pappalardo L.
Individual well-being strongly depends on food habits; therefore, it is important to educate the general population, and especially young people, about the importance of a healthy and balanced diet. To this end, understanding the real eating habits of people becomes fundamental for a better and more effective intervention to improve students' diets. In this paper we present two exploratory analyses based on centroid-based clustering that have the goal of understanding the food habits of university students. The first clustering analysis simply exploits the information about the students' consumption of specific food categories, while the second exploratory analysis includes the temporal dimension in order to capture information about when the students consume specific foods. The second approach enables the study of the impact of the time of consumption on the choice of food.
Source: PAP 2018 - The 2nd International Workshop on Personal Analytics and Privacy, pp. 156–171, Dublin, Ireland, 10-14 September 2018
DOI: 10.1007/978-3-030-13463-1_12
Project(s): SoBigData via OpenAIRE

See at: Unknown Repository Restricted | Unknown Repository Restricted | Unknown Repository Restricted | link.springer.com Restricted | Unknown Repository Restricted | CNR ExploRA Restricted


2019 Conference object Restricted

Helping your docker images to spread based on explainable models
Guidotti R., Soldani J., Neri D., Brogi A., Pedreschi D.
Docker is on the rise in today's enterprise IT. It permits shipping applications inside portable containers, which run from so-called Docker images. Docker images are distributed in public registries, which also monitor their popularity. The popularity of an image impacts its actual usage, and hence the potential revenues for its developers. In this paper, we present a solution based on interpretable decision and regression trees for estimating the popularity of a given Docker image, and for understanding how to improve an image to increase its popularity. The results presented in this work can provide valuable insights to Docker developers, helping them spread their images. Code related to this paper is available at: https://github.com/di-unipi-socc/DockerImageMiner.
Source: ECML-PKDD 2018, pp. 205–221, Dublin, Ireland, 10/09/2018 - 14/09/2018
DOI: 10.1007/978-3-030-10997-4_13
Project(s): SoBigData via OpenAIRE

See at: Unknown Repository Restricted | Unknown Repository Restricted | Unknown Repository Restricted | Unknown Repository Restricted | Unknown Repository Restricted | Unknown Repository Restricted | CNR ExploRA Restricted
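The regression-tree idea behind popularity estimation can be sketched on synthetic data. The image features, the popularity signal, and the coefficients below are all invented for illustration; the actual feature set and data come from the paper's DockerImageMiner pipeline.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(3)

# Hypothetical image features: [n_layers, size_mb, has_description].
X = np.column_stack([rng.integers(1, 30, 400),
                     rng.uniform(10, 2000, 400),
                     rng.integers(0, 2, 400)])

# Toy popularity signal: small, documented images get more pulls.
pulls = 1000 * X[:, 2] - 0.3 * X[:, 1] - 5 * X[:, 0] + rng.normal(0, 50, 400)

# A shallow regression tree stays interpretable: its splits tell the
# developer which feature changes would raise the popularity estimate.
reg = DecisionTreeRegressor(max_depth=3).fit(X, pulls)
root_feature = reg.tree_.feature[0]   # most influential feature at the root
```

Reading the tree top-down gives the actionable part: here the root split lands on the documentation flag, so adding a description is the change the model deems most impactful.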