9 result(s)
2022 Conference article Open Access OPEN
Detecting addiction, anxiety, and depression by users psychometric profiles
Monreale A, Iavarone B, Rossetto E, Beretta A
Detecting and characterizing people with mental disorders is an important task that could support the work of different healthcare professionals. Sometimes a diagnosis for a specific mental disorder requires a long time, which can be problematic because being diagnosed gives access to support groups, treatment programs, and medications that might help the patients. In this paper, we study the problem of exploiting supervised learning approaches, based on users' psychometric profiles extracted from Reddit posts, to detect users dealing with Addiction, Anxiety, and Depression disorders. The empirical evaluation shows an excellent predictive power of the psychometric profile and that features capturing the post's content are more effective for the classification task than features describing the user's writing style. We achieve an accuracy of 96% using the entire psychometric profile and an accuracy of 95% when we exclude linguistic features from the user profile.
DOI: 10.1145/3487553.3524918
Project(s): TAILOR via OpenAIRE, HumanE-AI-Net via OpenAIRE, SoBigData-PlusPlus via OpenAIRE

See at: ISTI Repository Open Access | dl.acm.org Restricted | CNR IRIS Restricted
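The classification setup described in the abstract above can be illustrated with a minimal, hypothetical sketch (not the authors' code): a table of precomputed psychometric features per user, a standard classifier, and cross-validated accuracy. The file name, column names, and model choice are all assumptions.

```python
# Minimal sketch (not the paper's code): supervised classification of users
# into disorder groups from a precomputed psychometric feature table.
# The CSV path, column names, and model choice are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical table: one row per user, psychometric features plus a label
# such as "addiction", "anxiety", "depression", or "control".
df = pd.read_csv("psychometric_profiles.csv")
X = df.drop(columns=["user_id", "label"])
y = df["label"]

clf = RandomForestClassifier(n_estimators=300, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"5-fold accuracy: {scores.mean():.3f} ± {scores.std():.3f}")
```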


2022 Conference article Open Access OPEN
Follow the flow: a prospective on the on-line detection of flow mental state through machine learning
Sajno E, Beretta A, Novielli N, Riva G
Flow is a precious mental state for achieving high sports performance. It is defined as an emotional state with high valence and high arousal levels. However, a viable detection system that could provide information about it in real time is not yet available. The prospective work presented here aims at creating an online flow detection framework. A supervised machine learning model will be trained to predict valence and arousal levels, both on existing databases and on freshly collected physiological data. As a final result, defining the minimally expensive (in terms of both sensors and time) amount of data needed to predict a flow state will enable the creation of a real-time flow detection interface.
DOI: 10.1109/metroxraine54828.2022.9967605
DOI: 10.31234/osf.io/9z5pe

See at: doi.org Open Access | CNR IRIS Open Access | ieeexplore.ieee.org Open Access | ISTI Repository Open Access | doi.org Restricted | CNR IRIS Restricted
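As a rough illustration of the planned setup (not the paper's pipeline), predicting valence and arousal can be framed as a two-output regression over windows of physiological features; the synthetic arrays below merely stand in for real sensor data, and the model choice is an assumption.

```python
# Minimal sketch (assumptions, not the paper's pipeline): predicting valence
# and arousal from physiological features as a two-output regression problem.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor
from sklearn.model_selection import train_test_split

# Hypothetical arrays: rows are time windows of physiological features
# (e.g. heart rate, skin conductance); targets are valence/arousal ratings.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))
y = rng.normal(size=(500, 2))          # columns: [valence, arousal]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MultiOutputRegressor(GradientBoostingRegressor())
model.fit(X_tr, y_tr)
print("R^2 on held-out windows:", model.score(X_te, y_te))
```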


2023 Conference article Open Access OPEN
The explanation dialogues: understanding how legal experts reason about XAI methods
State L, Bringas Colmenarejo A, Beretta A, Ruggieri S, Turini F, Law S
The Explanation Dialogues project is an expert focus study that aims to uncover expectations, reasoning, and rules of legal experts and practitioners towards explainable artificial intelligence (XAI). We examine legal perceptions and disputes that arise in a fictional scenario that resembles a daily life situation - a bank's use of an automated decision-making (ADM) system to decide on credit allocation to individuals. Through this simulation, the study aims to provide insights into the legal value and validity of explanations of ADMs, identify potential gaps and issues that may arise in the context of compliance with European legislation, and provide guidance on how to address these shortcomings.
Source: CEUR WORKSHOP PROCEEDINGS. Winterthur, Switzerland, 07-09/06/2023
Project(s): XAI via OpenAIRE

See at: ceur-ws.org Open Access | CNR IRIS Open Access | ISTI Repository Open Access | CNR IRIS Restricted


2023 Journal article Open Access OPEN
Improving trust and confidence in medical skin lesion diagnosis through explainable deep learning
Metta C, Beretta A, Guidotti R, Yin Y, Gallinari P, Rinzivillo S, Giannotti F
A key issue in critical contexts such as medical diagnosis is the interpretability of the deep learning models adopted in decision-making systems. Research in eXplainable Artificial Intelligence (XAI) is trying to solve this issue. However, XAI approaches are often only tested on generalist classifiers and do not represent realistic problems such as those of medical diagnosis. In this paper, we aim at improving the trust and confidence of users towards automatic AI decision systems in the field of medical skin lesion diagnosis by customizing an existing XAI approach for explaining an AI model able to recognize different types of skin lesions. The explanation is generated through the use of synthetic exemplar and counter-exemplar images of skin lesions, and our contribution offers the practitioner a way to highlight the crucial traits responsible for the classification decision. A validation survey with domain experts, beginners, and unskilled people shows that the use of explanations improves trust and confidence in the automatic decision system. Also, an analysis of the latent space adopted by the explainer unveils that some of the most frequent skin lesion classes are distinctly separated. This phenomenon may stem from the intrinsic characteristics of each class and may help resolve common misclassifications made by human experts.
Source: INTERNATIONAL JOURNAL OF DATA SCIENCE AND ANALYTICS
DOI: 10.1007/s41060-023-00401-z
Project(s): TAILOR via OpenAIRE, HumanE-AI-Net via OpenAIRE, XAI via OpenAIRE, SoBigData-PlusPlus via OpenAIRE

See at: International Journal of Data Science and Analytics Open Access | CNR IRIS Open Access | link.springer.com Open Access | ISTI Repository Open Access | CNR IRIS Restricted
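The exemplar/counter-exemplar mechanism described above relies on a generative latent space; the sketch below is only a simplified, retrieval-based approximation (not the authors' method): given a query image's latent code and predicted class, it returns the nearest latent neighbours with the same class (exemplars) and with a different class (counter-exemplars). All arrays are placeholders.

```python
# Simplified sketch of the exemplar / counter-exemplar idea (the paper uses a
# generative latent space; here we only retrieve nearest real images, which is
# an approximation, not the authors' method). All arrays are placeholders.
import numpy as np

def exemplars_counterexemplars(z_query, pred_query, Z, preds, k=3):
    """Return indices of the k closest latent points with the same predicted
    class (exemplars) and the k closest with a different class (counter-exemplars)."""
    d = np.linalg.norm(Z - z_query, axis=1)
    order = np.argsort(d)
    same = [i for i in order if preds[i] == pred_query][:k]
    diff = [i for i in order if preds[i] != pred_query][:k]
    return same, diff

# Hypothetical latent codes and classifier predictions for a gallery of images.
rng = np.random.default_rng(1)
Z = rng.normal(size=(200, 16))          # latent codes from some encoder
preds = rng.integers(0, 3, size=200)    # predicted skin-lesion classes
ex, cex = exemplars_counterexemplars(Z[0], preds[0], Z, preds)
print("exemplar indices:", ex, "counter-exemplar indices:", cex)
```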


2023 Conference article Restricted
Interpretable data partitioning through tree-based clustering methods
Guidotti R, Landi C, Beretta A, Fadda D, Nanni M
The growing interpretable machine learning research field is mainly focusing on the explanation of supervised approaches. However, unsupervised approaches might also benefit from considering interpretability aspects. While existing clustering methods only provide the assignment of records to clusters without justifying the partitioning, we propose tree-based clustering methods that offer interpretable data partitioning through a shallow decision tree. These decision trees enable easy-to-understand explanations of cluster assignments through short and understandable split conditions. The proposed methods are evaluated through experiments on synthetic and real datasets and proved to be more effective than traditional clustering approaches and interpretable ones in terms of standard evaluation measures and runtime. Finally, a case study involving human participation demonstrates the effectiveness of the interpretable clustering trees returned by the proposed method.
Source: LECTURE NOTES IN COMPUTER SCIENCE (LNAI), vol. 14276
DOI: 10.1007/978-3-031-45275-8_33

See at: doi.org Restricted | CNR IRIS Restricted | link.springer.com Restricted
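For intuition, the sketch below shows a post-hoc variant of the idea (not the algorithm proposed in the paper, which grows the clustering tree directly): cluster labels from k-means are re-described by a shallow decision tree whose split conditions serve as readable cluster justifications.

```python
# Minimal sketch, not the proposed algorithm: the paper builds clusters
# directly with a shallow tree, whereas this post-hoc variant fits a shallow
# decision tree to pre-computed cluster labels to obtain readable split rules.
from sklearn.datasets import load_iris
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier, export_text

X, _ = load_iris(return_X_y=True)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, labels)
print(export_text(tree, feature_names=load_iris().feature_names))
```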


2023 Journal article Open Access OPEN
Co-design of human-centered, explainable AI for clinical decision support
Panigutti C, Beretta A, Fadda D, Giannotti F, Pedreschi D, Perotti A, Rinzivillo S
eXplainable AI (XAI) involves two intertwined but separate challenges: the development of techniques to extract explanations from black-box AI models, and the way such explanations are presented to users, i.e., the explanation user interface. Despite its importance, the second aspect has received limited attention so far in the literature. Effective AI explanation interfaces are fundamental for allowing human decision-makers to take advantage of and effectively oversee high-risk AI systems. Following an iterative design approach, we present the first cycle of prototyping-testing-redesigning of an explainable AI technique and its explanation user interface for clinical Decision Support Systems (DSS). We first present an XAI technique that meets the technical requirements of the healthcare domain: sequential, ontology-linked patient data, and multi-label classification tasks. We demonstrate its applicability to explain a clinical DSS, and we design a first prototype of an explanation user interface. Next, we test such a prototype with healthcare providers and collect their feedback with a two-fold outcome: first, we obtain evidence that explanations increase users' trust in the XAI system; second, we obtain useful insights on the perceived deficiencies of their interaction with the system, so we can re-design a better, more human-centered explanation interface.
Source: ACM TRANSACTIONS ON INTERACTIVE INTELLIGENT SYSTEMS, vol. 13 (issue 4)
DOI: 10.1145/3587271
Project(s): HumanE-AI-Net via OpenAIRE, XAI via OpenAIRE

See at: dl.acm.org Open Access | Archivio istituzionale della Ricerca - Scuola Normale Superiore Open Access | CNR IRIS Open Access | ISTI Repository Open Access | ACM Transactions on Interactive Intelligent Systems Restricted | CNR IRIS Restricted
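As context for the "sequential, ontology-linked patient data and multi-label classification" requirement mentioned above, the toy sketch below sets up a multi-label prediction task over hypothetical diagnosis codes; it illustrates the task format only, not the XAI technique or the explanation interface presented in the paper.

```python
# Sketch of the multi-label task setup only (not the paper's technique):
# each patient history is a bag of hypothetical ontology codes, and the model
# predicts several possible future diagnoses at once.
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

histories = ["I10 E11 Z79", "J45 J30", "I10 I25", "E11 E78 I10"]   # past codes
next_dx = [["I25"], ["J45"], ["I50", "I25"], ["E78"]]               # future codes

X = CountVectorizer(token_pattern=r"\S+").fit_transform(histories)
Y = MultiLabelBinarizer().fit_transform(next_dx)

clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
print(clf.predict(X[:1]))   # multi-hot vector of predicted future codes
```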


2024 Journal article Open Access OPEN
Advancing dermatological diagnostics: interpretable AI for enhanced skin lesion classification
Metta C., Beretta A., Guidotti R., Yin Y., Gallinari P., Rinzivillo S., Giannotti F.
A crucial challenge in critical settings like medical diagnosis is making deep learning models used in decision-making systems interpretable. Efforts in Explainable Artificial Intelligence (XAI) are underway to address this challenge. Yet, many XAI methods are evaluated on broad classifiers and fail to address complex, real-world issues, such as medical diagnosis. In our study, we focus on enhancing user trust and confidence in automated AI decision-making systems, particularly for diagnosing skin lesions, by tailoring an XAI method to explain an AI model's ability to identify various skin lesion types. We generate explanations using synthetic images of skin lesions as examples and counterexamples, offering a method for practitioners to pinpoint the critical features influencing the classification outcome. A validation survey involving domain experts, novices, and laypersons has demonstrated that explanations increase trust and confidence in the automated decision system. Furthermore, our exploration of the model's latent space reveals clear separations among the most common skin lesion classes, a distinction that likely arises from the unique characteristics of each class and could assist in correcting frequent misdiagnoses by human professionals.
Source: DIAGNOSTICS, vol. 14 (issue 7)
DOI: 10.3390/diagnostics14070753
Project(s): CREXDATA via OpenAIRE, TAILOR via OpenAIRE, Future Artificial Intelligence Research, HumanE-AI-Net via OpenAIRE, XAI via OpenAIRE, SoBigData-PlusPlus via OpenAIRE

See at: Diagnostics Open Access | PubMed Central Open Access | Archivio istituzionale della Ricerca - Scuola Normale Superiore Open Access | CNR IRIS Open Access | www.mdpi.com Open Access | Archivio della Ricerca - Università di Pisa Restricted | IRIS Cnr Restricted | CNR IRIS Restricted
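The latent-space separation mentioned in the abstract could, for instance, be quantified with a silhouette score over latent codes grouped by predicted class; the sketch below uses random placeholder data and is not the analysis performed in the paper.

```python
# Illustrative check (assumptions only) of how "separation in latent space"
# could be quantified: a silhouette score over latent codes grouped by class.
import numpy as np
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(2)
Z = rng.normal(size=(300, 32))            # hypothetical latent codes
classes = rng.integers(0, 5, size=300)    # hypothetical lesion classes
print(f"silhouette over latent codes: {silhouette_score(Z, classes):.3f}")
```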


2024 Conference article Open Access OPEN
XAI in healthcare
Gezici G., Metta C, Beretta A., Pellungrini R., Rinzivillo S., Pedreschi D., Giannotti F.
The evolution of Explainable Artificial Intelligence (XAI) within healthcare represents a crucial turn towards more transparent, understandable, and patient-centric AI applications. The main objective is not only to increase the accuracy of AI models but also, and more importantly, to establish user trust in decision support systems by improving their interpretability. This extended abstract outlines the ongoing efforts and advancements of our lab in addressing the challenges brought up by complex AI systems in the healthcare domain. Currently, there are four main projects: Prostate Imaging Cancer AI, Liver Transplantation & Diabetes, Breast Cancer, and Doctor XAI and ABELE.
Source: CEUR WORKSHOP PROCEEDINGS, vol. 3825, pp. 69-73. Malmö, Sweden, 10-11/06/2024
Project(s): HumanE-AI-Net via OpenAIRE, XAI via OpenAIRE

See at: ceur-ws.org Open Access | CNR IRIS Open Access | CNR IRIS Restricted


2024 Journal article Open Access OPEN
Towards transparent healthcare: advancing local explanation methods in Explainable Artificial Intelligence
Metta C., Beretta A., Pellungrini R., Rinzivillo S., Giannotti F.
This paper focuses on the use of local Explainable Artificial Intelligence (XAI) methods, particularly the Local Rule-Based Explanations (LORE) technique, within healthcare and medical settings. It emphasizes the critical role of interpretability and transparency in AI systems for diagnosing diseases, predicting patient outcomes, and creating personalized treatment plans. While acknowledging the complexities and inherent trade-offs between interpretability and model performance, our work underscores the significance of local XAI methods in enhancing decision-making processes in healthcare. By providing granular, case-specific insights, local XAI methods like LORE enhance physicians' and patients' understanding of machine learning models and their outcomes. Our paper reviews significant contributions to local XAI in healthcare, highlighting its potential to improve clinical decision making, ensure fairness, and comply with regulatory standards.
Source: BIOENGINEERING, vol. 11 (issue 4)
DOI: 10.3390/bioengineering11040369
Project(s): CREXDATA via OpenAIRE, TAILOR via OpenAIRE, HumanE-AI-Net via OpenAIRE, XAI via OpenAIRE, SoBigData-PlusPlus via OpenAIRE

See at: Bioengineering Open Access | CNR IRIS Open Access | www.mdpi.com Open Access | Software Heritage Restricted | IRIS Cnr Restricted | GitHub Restricted | CNR IRIS Restricted
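The LORE procedure discussed in the paper can be summarized as: generate a local neighbourhood around the instance to explain, label it with the black-box model, fit a small decision tree on that neighbourhood, and read off the rules covering the instance. The condensed sketch below follows that outline but replaces LORE's genetic neighbourhood generation with simple Gaussian perturbation, so it is an approximation rather than the actual method; dataset and models are placeholders.

```python
# Condensed sketch of the LORE idea (real LORE generates the neighbourhood with
# a genetic algorithm; here simple Gaussian perturbation stands in for it).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

x0 = X[0]                                           # instance to explain
rng = np.random.default_rng(0)
neighborhood = x0 + rng.normal(scale=X.std(axis=0) * 0.3, size=(500, X.shape[1]))
labels = black_box.predict(neighborhood)            # black-box labels locally

surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(neighborhood, labels)                 # local interpretable surrogate
rules = export_text(surrogate, feature_names=list(load_breast_cancer().feature_names))
print(rules)                                        # local, rule-based explanation
```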