14 result(s)
2023 Conference article Open Access OPEN
Score vs. winrate in score-based games: which reward for reinforcement learning?
Pasqualini L, Parton M, Morandin F, Amato G, Gini R, Metta C, Fantozzi M, Marchetti A
In recent years, DeepMind's AlphaZero algorithm has become the state of the art for efficiently tackling perfect-information two-player zero-sum games with a win/lose outcome. However, when the win/lose outcome is decided by a final score difference, AlphaZero may play score-suboptimal moves, because all winning final positions are equivalent from the win/lose outcome perspective. This can be an issue, for instance when the agent is used for teaching, or when trying to understand whether there is a better move. Moreover, there is the theoretical quest for the perfect game. A naive approach would be to train an AlphaZero-like agent to predict score differences instead of win/lose outcomes. Since the game of Go is deterministic, this should also produce outcome-optimal play. However, it is a folklore belief that "this does not work". In this paper we first provide empirical evidence for this belief. We then give a theoretical interpretation of this suboptimality in a general perfect-information two-player zero-sum game where the complexity of a game like Go is replaced by randomness of the environment. We show that an outcome-optimal policy has a different preference for uncertainty when it is winning or losing. In particular, when in a losing state, an outcome-optimal agent chooses actions leading to a higher variance of the score. We then posit that when approximation is involved, a deterministic game behaves like a nondeterministic game, where the score variance is modeled by how uncertain the position is. We validate this hypothesis in AlphaZero-like software with a human expert.
DOI: 10.1109/icmla55696.2022.00099
DOI: 10.48550/arxiv.2201.13176
See at: arXiv.org e-Print Archive Open Access | CNR IRIS Open Access | ieeexplore.ieee.org Open Access | ISTI Repository Open Access | doi.org Restricted | CNR IRIS Restricted
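The variance preference described in this abstract can be illustrated with a toy calculation; the action outcomes below are hypothetical and serve only to show why, from a losing position, an outcome-optimal agent favors the higher-variance action.

```python
# Hypothetical illustration (numbers not from the paper): two actions have the
# same expected final score difference (-2), but only the high-variance one
# ever leads to a positive score difference, i.e. to a win.
from statistics import mean

actions = {
    "low variance":  [-3, -2, -1],    # equally likely final score differences
    "high variance": [-10, -2, 6],
}

for name, outcomes in actions.items():
    win_prob = sum(o > 0 for o in outcomes) / len(outcomes)
    print(f"{name}: expected score {mean(outcomes):+.1f}, win probability {win_prob:.2f}")
```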


2025 Conference article Open Access OPEN
SwitchPath: enhancing exploration in neural networks learning dynamics
Di Cecco A., Papini A., Metta C., Fantozzi M., Galfré S. G., Morandin F., Parton M.
We introduce SwitchPath, a novel stochastic activation function that enhances neural network exploration, performance, and generalization by probabilistically toggling between the activation of a neuron and its negation. SwitchPath draws inspiration from analogies between neural networks and decision trees, as well as from the exploratory and regularizing properties of Dropout. Unlike Dropout, which intermittently reduces network capacity by deactivating neurons, SwitchPath maintains continuous activation, allowing networks to dynamically explore alternative information pathways while fully utilizing their capacity. Building on the concept of ε-greedy algorithms to balance exploration and exploitation, SwitchPath enhances generalization capabilities over traditional activation functions. The exploration of alternative paths happens during training without sacrificing computational efficiency. This paper presents the theoretical motivations, practical implementations, and empirical results, showcasing the described advantages of SwitchPath over established stochastic activation mechanisms.
Source: LECTURE NOTES IN COMPUTER SCIENCE, vol. 15243 - Proceedings, Part I, pp. 275-291. Pisa, Italy, 14-16/10/2024
DOI: 10.1007/978-3-031-78977-9_18
Project(s): SoBigData-PlusPlus via OpenAIRE
See at: CNR IRIS Open Access | link.springer.com Open Access | CNR IRIS Restricted
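Going only by the description in the abstract above, a minimal sketch of a SwitchPath-style activation might look as follows; the toggle probability p, the use of ReLU as the base activation, and every other detail are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def switchpath_activation(x, p=0.1, rng=None, training=True):
    """Toy sketch of a SwitchPath-style stochastic activation (assumed details).

    With probability p each neuron uses the negation of its pre-activation
    instead of the pre-activation itself, so some signal always flows
    (unlike Dropout, which zeroes neurons). ReLU and the value of p are
    illustrative assumptions, not taken from the paper.
    """
    rng = rng or np.random.default_rng()
    if training:
        flip = rng.random(x.shape) < p        # neurons routed through the negated path
        x = np.where(flip, -x, x)
    return np.maximum(x, 0.0)                  # base activation (assumed ReLU)

# Example: a small batch of pre-activations
pre = np.array([[1.0, -0.5, 2.0], [-1.5, 0.3, 0.0]])
print(switchpath_activation(pre, p=0.5, rng=np.random.default_rng(0)))
```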


2024 Conference article Restricted
Predicting the failure of component X in the Scania dataset with graph neural networks
Parton M., Fois A., Veglio M., Metta C., Gregnanin M.
We use Graph Neural Networks on signature-augmented graphs derived from time series for Predictive Maintenance. With this technique, we propose a solution to the Intelligent Data Analysis Industrial Challenge 2024 on the newly released SCANIA Component X dataset. We describe an Exploratory Data Analysis and preprocessing of the dataset, proposing improvements for its description in the SCANIA paper.
Source: LECTURE NOTES IN COMPUTER SCIENCE, vol. 14642, pp. 251-259. Stockholm, Sweden, 24-26/04/2024
DOI: 10.1007/978-3-031-58553-1_20
Project(s): SoBigData-PlusPlus via OpenAIRE
See at: doi.org Restricted | CNR IRIS Restricted | link.springer.com Restricted


2023 Conference article Open Access OPEN
Artificial intelligence and renegotiation of commercial lease contracts affected by pandemic-related contingencies from Covid-19. The project A.I.A.Co.
Parton M., Angelone M., Metta C., D'Ovidio S., Massarelli R., Moscardelli L., Amato G.
This paper aims to investigate the possibility of using artificial intelligence (AI) to resolve the legal issues raised by the Covid-19 emergency about the fate of continuing execution contracts, or those with deferred or periodic execution, as well as, more generally, to deal with exceptional events and contingencies. We first study whether the Italian legal system allows for "maintenance" remedies to cope with contingencies and to avoid the termination of the contract, while ensuring effective protection of the interests of both parties. We then give a complete and technical description of an AI-based predictive framework, aimed at assisting both the Magistrate (in the course of litigation) and the parties themselves (in out-of-court proceedings) in the redetermination of the rent of commercial lease contracts. This framework, called A.I.A.Co. for Artificial Intelligence for contract law Against Covid-19, has been developed under the Italian grant "Fondo Integrativo Speciale per la Ricerca".
DOI: 10.48550/arxiv.2210.09515
See at: arXiv.org e-Print Archive Open Access | CNR IRIS Open Access | ARUdA Restricted | doi.org Restricted | GitHub Restricted | IRIS Cnr Restricted | CNR IRIS Restricted


2025 Conference article Open Access OPEN
A systematization of the Wagner framework: graph theory conjectures and reinforcement learning
Angileri F., Lombardi G., Fois A., Faraone R., Metta C., Salvi M., Bianchi L. A., Fantozzi M., Galfrè S. G., Pavesi D., Parton M., Morandin F.
In 2021, Adam Zsolt Wagner proposed an approach to disprove conjectures in graph theory using Reinforcement Learning (RL). Wagner frames a conjecture as f(G) < 0 for every graph G, for a certain invariant f; one can then play a single-player graph-building game, where at each turn the player decides whether to add an edge or not. The game ends when all edges have been considered, resulting in a certain graph G_T, and f(G_T) is the final score of the game; RL is then used to maximize this score. This brilliant idea is as simple as it is innovative, and it lends itself to systematic generalization. Several different single-player graph-building games can be employed, along with various RL algorithms. Moreover, RL maximizes the cumulative reward, allowing for step-by-step rewards instead of a single final score, provided the final cumulative reward represents the quantity of interest f(G_T). In this paper, we discuss these and various other choices that can be significant in Wagner’s framework. As a contribution to this systematization, we present four distinct single-player graph-building games. Each game employs both a step-by-step reward system and a single final score. We also propose a principled approach to select the most suitable neural network architecture for any given conjecture and introduce a new dataset of graphs labeled with their Laplacian spectra. The games have been implemented as environments in the Gymnasium framework, and along with the dataset and a simple interface to play with the environments, are available at https://github.com/CuriosAI/graph_conjectures.
Source: LECTURE NOTES IN COMPUTER SCIENCE, vol. 15243 - Proceedings, Part I, pp. 325-338. Pisa, Italy, 14-16/10/2024
DOI: 10.1007/978-3-031-78977-9_21
Project(s): SoBigData-PlusPlus via OpenAIRE
See at: CNR IRIS Open Access | link.springer.com Open Access | CNR IRIS Restricted
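The single-player graph-building game described in this abstract maps naturally onto the Gymnasium environment interface the paper mentions. The sketch below follows that structure (iterate over all candidate edges, decide whether to add each one, receive f(G_T) as the final reward); the toy invariant f and all implementation details are assumptions, not the code released at https://github.com/CuriosAI/graph_conjectures.

```python
# Illustrative Wagner-style graph-building game as a Gymnasium environment.
import itertools
import gymnasium as gym
import numpy as np


class GraphBuildingEnv(gym.Env):
    def __init__(self, n_vertices=5):
        self.n = n_vertices
        self.edges = list(itertools.combinations(range(self.n), 2))
        self.action_space = gym.spaces.Discrete(2)   # 0: skip edge, 1: add edge
        self.observation_space = gym.spaces.Box(
            low=0.0, high=1.0, shape=(len(self.edges) + 1,), dtype=np.float32
        )

    def _obs(self):
        # Current edge selection plus the (normalized) index of the edge under consideration.
        return np.append(self.selected, self.t / len(self.edges)).astype(np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.selected = np.zeros(len(self.edges), dtype=np.int8)
        self.t = 0
        return self._obs(), {}

    def step(self, action):
        self.selected[self.t] = action
        self.t += 1
        terminated = self.t == len(self.edges)
        reward = self._score() if terminated else 0.0   # single-final-score variant
        return self._obs(), reward, terminated, False, {}

    def _score(self):
        # Placeholder invariant f(G_T): triangles minus edges (purely illustrative).
        A = np.zeros((self.n, self.n))
        for (i, j), keep in zip(self.edges, self.selected):
            if keep:
                A[i, j] = A[j, i] = 1
        triangles = np.trace(A @ A @ A) / 6
        return float(triangles - A.sum() / 2)


# Random rollout
env = GraphBuildingEnv()
obs, _ = env.reset(seed=0)
done = False
while not done:
    obs, reward, done, _, _ = env.step(env.action_space.sample())
print("final score f(G_T):", reward)
```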


2024 Conference article Open Access OPEN
GloNets: Globally Connected Neural Networks
Di Cecco A., Metta C., Fantozzi M., Morandin F., Parton M.
Deep learning architectures suffer from depth-related performance degradation, limiting the effective depth of neural networks. Approaches like ResNet are able to mitigate this, but they do not completely eliminate the problem. We introduce Globally Connected Neural Networks (GloNet), a novel architecture overcoming depth-related issues, designed to be superimposed on any model, enhancing its depth without increasing complexity or reducing performance. With GloNet, the network's head uniformly receives information from all parts of the network, regardless of their level of abstraction. This enables GloNet to self-regulate information flow during training, reducing the influence of less effective deeper layers, and allowing for stable training irrespective of network depth. This paper details GloNet's design, its theoretical basis, and a comparison with existing similar architectures. Experiments show GloNet's self-regulation ability and resilience to depth-related learning challenges, like performance degradation. Our findings suggest GloNet as a strong alternative to traditional architectures like ResNets.
Source: LECTURE NOTES IN COMPUTER SCIENCE, vol. 14641, pp. 53-64
DOI: 10.1007/978-3-031-58547-0_5
DOI: 10.48550/arxiv.2311.15947
See at: arXiv.org e-Print Archive Open Access | IRIS Cnr Restricted | doi.org Restricted | CNR IRIS Restricted
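One way to picture the GloNet idea summarized above is a stack of blocks whose outputs are all routed to the head, so that the head sees every level of abstraction uniformly. The sketch below assumes a simple sum as the aggregation and fully connected blocks; these are illustrative choices, not the architecture specified in the paper.

```python
# Minimal sketch of a GloNet-style block stack (assumed reading of the abstract):
# the head receives the sum of every block's output rather than only the last one.
import torch
import torch.nn as nn

class GloNetSketch(nn.Module):
    def __init__(self, in_dim=16, hidden=32, depth=10, out_dim=3):
        super().__init__()
        self.stem = nn.Linear(in_dim, hidden)
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU()) for _ in range(depth)
        )
        self.head = nn.Linear(hidden, out_dim)

    def forward(self, x):
        x = torch.relu(self.stem(x))
        aggregate = x                      # head sees the stem output...
        for block in self.blocks:
            x = block(x)
            aggregate = aggregate + x      # ...plus every block's output, uniformly
        return self.head(aggregate)

model = GloNetSketch()
print(model(torch.randn(4, 16)).shape)     # torch.Size([4, 3])
```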


2024 Conference article Open Access OPEN
Increasing biases can be more efficient than increasing weights
Carlo Metta, Marco Fantozzi, Andrea Papini, Gianluca Amato, Matteo Bergamaschi, Silvia Giulia Galfrè, Alessandro Marchetti, Michelangelo Vegliò, Maurizio Parton, Francesco Morandin
We introduce a novel computational unit for neural networks that features multiple biases, challenging the traditional perceptron structure. This unit emphasizes the importance of preserving uncorrupted information as it is passed from one unit to the next, applying activation functions later in the process with specialized biases for each unit. Through both empirical and theoretical analyses, we show that by focusing on increasing biases rather than weights, there is potential for significant enhancement in a neural network model's performance. This approach offers an alternative perspective on optimizing information flow within neural networks. Commented source code at https://github.com/CuriosAI/dac-dev.
DOI: 10.1109/wacv57701.2024.00279
DOI: 10.48550/arxiv.2301.00924
Project(s): SoBigData-PlusPlus via OpenAIRE
See at: arXiv.org e-Print Archive Open Access | ARUdA Open Access | IRIS Cnr Open Access | Software Heritage Restricted | doi.org Restricted | GitHub Restricted | CNR IRIS Restricted
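A hedged reading of the abstract above is that raw pre-activations are passed on uncorrupted and each receiving unit applies its own bias inside the activation, replacing the single shared bias of a standard perceptron. The sketch below contrasts the two under that assumed formulation; it is an illustration of the idea, not the unit defined in the paper or in https://github.com/CuriosAI/dac-dev.

```python
# Assumed "multiple biases" formulation, for illustration only.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def standard_layer(x, W, b):
    # Classic perceptron layer: one bias per output unit, activation after the sum.
    # y = relu(W x + b)
    return relu(W @ x + b)

def multi_bias_layer(x, W, B):
    # Assumed multi-bias layer: one bias per connection, applied before the
    # activation, so each output unit sees its own shifted copy of the raw input.
    # y_k = sum_i W[k, i] * relu(x_i + B[k, i])
    return np.sum(W * relu(x[None, :] + B), axis=1)

rng = np.random.default_rng(0)
x = rng.normal(size=5)                 # raw (unactivated) inputs from the previous layer
W = rng.normal(size=(3, 5))
b = rng.normal(size=3)                 # 3 biases (standard unit)
B = rng.normal(size=(3, 5))            # 15 biases (one per connection)
print(standard_layer(x, W, b))
print(multi_bias_layer(x, W, B))
```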


2021 Conference article Open Access OPEN
Exemplars and counterexemplars explanations for image classifiers, targeting skin lesion labeling
Metta C, Guidotti R, Yin Y, Gallinari P, Rinzivillo S
Explainable AI consists in developing mechanisms allowing for an interaction between decision systems and humans by making the decisions of the former understandable. This is particularly important in sensitive contexts like the medical domain. We propose a use case study, for skin lesion diagnosis, illustrating how it is possible to provide the practitioner with explanations on the decisions of a state-of-the-art deep neural network classifier trained to characterize skin lesions from examples. Our framework consists of a trained classifier onto which an explanation module operates. The latter is able to offer the practitioner exemplars and counterexemplars for the classification diagnosis, thus allowing the physician to interact with the automatic diagnosis system. The exemplars are generated via an adversarial autoencoder. We illustrate the behavior of the system on representative examples.
Source: PROCEEDINGS - IEEE SYMPOSIUM ON COMPUTERS AND COMMUNICATIONS. Athens, Greece, 5-8/09/2021
DOI: 10.1109/iscc53001.2021.9631485
Project(s): AI4EU via OpenAIRE, TAILOR via OpenAIRE, HumanE-AI-Net via OpenAIRE
See at: CNR IRIS Open Access | ieeexplore.ieee.org Open Access | ISTI Repository Open Access | doi.org Restricted | CNR IRIS Restricted
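The exemplar and counterexemplar mechanism described above can be pictured as perturbing an instance in a latent space, decoding the perturbations, and keeping those the classifier labels the same (exemplars) or differently (counterexemplars). The sketch below uses identity encode/decode functions and a threshold classifier as stand-ins for the adversarial autoencoder and the deep classifier of the paper.

```python
# Schematic exemplar / counterexemplar generation via latent perturbation.
import numpy as np

encode = lambda x: x                       # stand-in encoder
decode = lambda z: z                       # stand-in decoder
classify = lambda x: int(x.sum() > 0)      # stand-in black-box classifier

def explain(x, n_samples=200, scale=0.5, rng=None):
    rng = rng or np.random.default_rng(0)
    label = classify(x)
    z = encode(x)
    exemplars, counterexemplars = [], []
    for _ in range(n_samples):
        sample = decode(z + rng.normal(scale=scale, size=z.shape))
        (exemplars if classify(sample) == label else counterexemplars).append(sample)
    return exemplars, counterexemplars

x = np.array([0.3, 0.2, -0.1])
ex, cex = explain(x)
print(len(ex), "exemplars,", len(cex), "counterexemplars")
```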


2022 Conference article Open Access OPEN
Exemplars and counterexemplars explanations for skin lesion classifiers
Metta C, Guidotti R, Yin Y, Gallinari P, Rinzivillo S
Explainable AI consists in developing models allowing interaction between decision systems and humans by making the decisions understandable. We propose a case study for skin lesion diagnosis showing how it is possible to provide explanations of the decisions of a deep neural network trained to label skin lesions.
DOI: 10.3233/faia220209
Project(s): HumanE-AI-Net via OpenAIRE
See at: ebooks.iospress.nl Open Access | CNR IRIS Open Access | ISTI Repository Open Access | CNR IRIS Restricted


2025 Conference article Open Access OPEN
Explainable AI in time-sensitive scenarios: prefetched offline explanation model
Russo F. M., Metta C., Monreale A., Rinzivillo S., Pinelli F.
As predictive machine learning models become increasingly adopted and advanced, their role has evolved from merely predicting outcomes to actively shaping them. This evolution has underscored the importance of Trustworthy AI, highlighting the necessity to extend our focus beyond mere accuracy and toward a comprehensive understanding of these models’ behaviors within the specific contexts of their applications. To further progress in explainability, we introduce POEM, Prefetched Offline Explanation Model, a model-agnostic, local explainability algorithm for image data. The algorithm generates exemplars, counterexemplars and saliency maps to provide quick and effective explanations suitable for time-sensitive scenarios. Leveraging an existing local algorithm, POEM infers factual and counterfactual rules from data to create illustrative examples and opposite scenarios with enhanced stability by design. A novel mechanism then matches incoming test points with an explanation base and produces diverse exemplars, informative saliency maps and believable counterexemplars. Experimental results indicate that POEM outperforms its predecessor ABELE in speed and in its ability to generate more nuanced and varied exemplars, alongside more insightful saliency maps and valuable counterexemplars.
Source: LECTURE NOTES IN COMPUTER SCIENCE, vol. 15244 - Proceedings, Part II, pp. 167-182. Pisa, Italy, 14-16/10/2024
DOI: 10.1007/978-3-031-78980-9_11
Project(s): TANGO via OpenAIRE, XAI via OpenAIRE, SoBigData-PlusPlus via OpenAIRE
See at: CNR IRIS Open Access | link.springer.com Open Access | CNR IRIS Restricted
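The prefetching idea outlined in this abstract can be sketched as an offline phase that precomputes explanations for a base of reference points and an online phase that matches an incoming instance to its nearest base point. The nearest-neighbour matching and the toy "explanation" below are assumptions for illustration, not POEM's actual matching mechanism.

```python
# Hedged sketch of prefetched offline explanations.
import numpy as np

class PrefetchedExplainer:
    def __init__(self, base_points, slow_explainer):
        self.base = np.asarray(base_points)
        # Offline phase: run the expensive explainer once per base point.
        self.explanations = [slow_explainer(p) for p in self.base]

    def explain(self, x):
        # Online phase: serve the explanation of the closest base point.
        idx = int(np.argmin(np.linalg.norm(self.base - x, axis=1)))
        return self.explanations[idx]

# Toy "slow" explainer: pretend the explanation is a saliency-like weight vector.
slow_explainer = lambda p: {"saliency": np.abs(p) / (np.abs(p).sum() + 1e-9)}

base = np.random.default_rng(0).normal(size=(100, 8))
explainer = PrefetchedExplainer(base, slow_explainer)
print(explainer.explain(np.zeros(8)))
```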


2023 Journal article Open Access OPEN
Improving trust and confidence in medical skin lesion diagnosis through explainable deep learning
Metta C, Beretta A, Guidotti R, Yin Y, Gallinari P, Rinzivillo S, Giannotti F
A key issue in critical contexts such as medical diagnosis is the interpretability of the deep learning models adopted in decision-making systems. Research in eXplainable Artificial Intelligence (XAI) is trying to solve this issue. However, XAI approaches are often only tested on generalist classifiers and do not represent realistic problems such as those of medical diagnosis. In this paper, we aim at improving the trust and confidence of users towards automatic AI decision systems in the field of medical skin lesion diagnosis by customizing an existing XAI approach for explaining an AI model able to recognize different types of skin lesions. The explanation is generated through the use of synthetic exemplar and counter-exemplar images of skin lesions, and our contribution offers the practitioner a way to highlight the crucial traits responsible for the classification decision. A validation survey with domain experts, beginners, and unskilled people shows that the use of explanations improves trust and confidence in the automatic decision system. Also, an analysis of the latent space adopted by the explainer unveils that some of the most frequent skin lesion classes are distinctly separated. This phenomenon may stem from the intrinsic characteristics of each class and may help resolve common misclassifications made by human experts.
Source: INTERNATIONAL JOURNAL OF DATA SCIENCE AND ANALYTICS
DOI: 10.1007/s41060-023-00401-z
Project(s): TAILOR via OpenAIRE, HumanE-AI-Net via OpenAIRE, XAI via OpenAIRE, SoBigData-PlusPlus via OpenAIRE
See at: International Journal of Data Science and Analytics Open Access | CNR IRIS Open Access | link.springer.com Open Access | ISTI Repository Open Access | CNR IRIS Restricted


2024 Journal article Open Access OPEN
Advancing dermatological diagnostics: interpretable AI for enhanced skin lesion classification
Metta C., Beretta A., Guidotti R., Yin Y., Gallinari P., Rinzivillo S., Giannotti F.
A crucial challenge in critical settings like medical diagnosis is making deep learning models used in decision-making systems interpretable. Efforts in Explainable Artificial Intelligence (XAI) are underway to address this challenge. Yet, many XAI methods are evaluated on broad classifiers and fail to address complex, real-world issues, such as medical diagnosis. In our study, we focus on enhancing user trust and confidence in automated AI decision-making systems, particularly for diagnosing skin lesions, by tailoring an XAI method to explain an AI model’s ability to identify various skin lesion types. We generate explanations using synthetic images of skin lesions as examples and counterexamples, offering a method for practitioners to pinpoint the critical features influencing the classification outcome. A validation survey involving domain experts, novices, and laypersons has demonstrated that explanations increase trust and confidence in the automated decision system. Furthermore, our exploration of the model’s latent space reveals clear separations among the most common skin lesion classes, a distinction that likely arises from the unique characteristics of each class and could assist in correcting frequent misdiagnoses by human professionals.
Source: DIAGNOSTICS, vol. 14 (issue 7)
DOI: 10.3390/diagnostics14070753
Project(s): CREXDATA via OpenAIRE, TAILOR via OpenAIRE, Future Artificial Intelligence Research, HumanE-AI-Net via OpenAIRE, XAI via OpenAIRE, SoBigData-PlusPlus via OpenAIRE
See at: Diagnostics Open Access | PubMed Central Open Access | Archivio istituzionale della Ricerca - Scuola Normale Superiore Open Access | CNR IRIS Open Access | www.mdpi.com Open Access | Archivio della Ricerca - Università di Pisa Restricted | IRIS Cnr Restricted | CNR IRIS Restricted


2024 Conference article Open Access OPEN
XAI in healthcare
Gezici G., Metta C., Beretta A., Pellungrini R., Rinzivillo S., Pedreschi D., Giannotti F.
The evolution of Explainable Artificial Intelligence (XAI) within healthcare represents a crucial turn towards more transparent, understandable, and patient-centric AI applications. The main objective is not only to increase the accuracy of AI models but also, and more importantly, to establish user trust in decision support systems through improving their interpretability. This extended abstract outlines the ongoing efforts and advancements of our lab in addressing the challenges brought up by complex AI systems in the healthcare domain. Currently, there are four main projects: Prostate Imaging Cancer AI, Liver Transplantation & Diabetes, Breast Cancer, and Doctor XAI and ABELE.
Source: CEUR WORKSHOP PROCEEDINGS, vol. 3825, pp. 69-73. Malmö, Sweden, 10-11/06/2024
Project(s): HumanE-AI-Net via OpenAIRE, XAI via OpenAIRE

See at: ceur-ws.org Open Access | CNR IRIS Open Access | CNR IRIS Restricted


2024 Journal article Open Access OPEN
Towards transparent healthcare: advancing local explanation methods in Explainable Artificial Intelligence
Metta C., Beretta A., Pellungrini R., Rinzivillo S., Giannotti F.
This paper focuses on the use of local Explainable Artificial Intelligence (XAI) methods, particularly the Local Rule-Based Explanations (LORE) technique, within healthcare and medical settings. It emphasizes the critical role of interpretability and transparency in AI systems for diagnosing diseases, predicting patient outcomes, and creating personalized treatment plans. While acknowledging the complexities and inherent trade-offs between interpretability and model performance, our work underscores the significance of local XAI methods in enhancing decision-making processes in healthcare. By providing granular, case-specific insights, local XAI methods like LORE enhance physicians’ and patients’ understanding of machine learning models and their outcomes. Our paper reviews significant contributions to local XAI in healthcare, highlighting its potential to improve clinical decision making, ensure fairness, and comply with regulatory standards.
Source: BIOENGINEERING, vol. 11 (issue 4)
DOI: 10.3390/bioengineering11040369
Project(s): CREXDATA via OpenAIRE, TAILOR via OpenAIRE, HumanE-AI-Net via OpenAIRE, XAI via OpenAIRE, SoBigData-PlusPlus via OpenAIRE
See at: Bioengineering Open Access | CNR IRIS Open Access | www.mdpi.com Open Access | Software Heritage Restricted | IRIS Cnr Restricted | GitHub Restricted | CNR IRIS Restricted
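LORE, the technique this abstract centres on, produces local rule-based explanations. The sketch below shows the general flavour of a local surrogate approach (synthetic neighbourhood around the instance, black-box labelling, shallow decision tree read off as rules); the Gaussian neighbourhood and the scikit-learn tree are stand-ins for LORE's genetic neighbourhood generation and rule extraction, not its actual implementation.

```python
# Hedged sketch of a local rule-based explanation in the spirit of LORE.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

black_box = lambda X: (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # toy model to explain

def local_rule_explanation(x, n_samples=500, scale=0.5, rng=None):
    rng = rng or np.random.default_rng(0)
    # Synthetic neighbourhood around the instance, labelled by the black box.
    neighbourhood = x + rng.normal(scale=scale, size=(n_samples, x.shape[0]))
    labels = black_box(neighbourhood)
    # Shallow decision tree as the local surrogate; its paths act as rules.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(neighbourhood, labels)
    return export_text(surrogate, feature_names=[f"x{i}" for i in range(x.shape[0])])

instance = np.array([0.2, -0.1, 1.0])
print("black-box label:", black_box(instance[None, :])[0])
print(local_rule_explanation(instance))
```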