6 result(s)
2023 Conference article Open Access
Explaining learning to rank methods to improve them
Veneri A.
State-of-the-art methods for Learning to Rank (LtR), whether designed for tabular or textual data, are incredibly complex. Increasing the complexity of the models has many drawbacks, including difficulties in understanding the logic behind each prediction and a lack of trust in the system during its deployment. This paper, which describes the author's goals during his Ph.D., analyzes and discusses how the ideas and tools of the eXplainable Artificial Intelligence (XAI) field can make the most effective LtR methods understandable to practitioners, with the final goal of making them more efficient and/or of better understanding when they can be improved. The strategies adopted to achieve these goals differ with the type of model analyzed, ranging from traditional LtR models based on ensembles of decision trees over handcrafted features to fairly new neural LtR models working on text data.
Source: CIKM '23 - 32nd ACM International Conference on Information and Knowledge Management, pp. 5185–5188, Birmingham, UK, 21-25/10/2023
An illustrative code sketch follows this record.
DOI: 10.1145/3583780.3616002
See at: dl.acm.org Open Access | CNR ExploRA
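
The thesis abstract above spans both tree-ensemble and neural rankers. As a concrete, hedged illustration of the kind of XAI tooling it refers to, the following minimal sketch applies SHAP feature attribution to a LambdaMART-style ranker trained with LightGBM; the synthetic data, group sizes, and hyperparameters are placeholders of mine, not details from the paper.

```python
# Illustrative only: attributing a tree-ensemble ranker's scores to its
# input features with SHAP. Data and hyperparameters are placeholders.
import numpy as np
import lightgbm as lgb
import shap

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))          # 1000 query-document feature vectors
y = rng.integers(0, 5, size=1000)        # graded relevance labels (0-4)
groups = [100] * 10                      # 10 queries with 100 documents each

ranker = lgb.LGBMRanker(objective="lambdarank", n_estimators=100)
ranker.fit(X, y, group=groups)

# TreeExplainer computes exact SHAP values for tree ensembles; each row of
# shap_values tells how much every feature pushed that document's score
# above or below the expected score.
explainer = shap.TreeExplainer(ranker)
shap_values = explainer.shap_values(X[:5])
print(shap_values.shape)                 # (5, 10): one attribution per feature
```

Per-document attributions of this kind are a standard starting point for diagnosing which handcrafted features drive a tree-based ranker's scores.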


2023 Conference article Closed Access
A theoretical framework for AI models explainability with application in biomedicine
Rizzo M., Veneri A., Albarelli A., Lucchese C., Nobile M., Conati C.
EXplainable Artificial Intelligence (XAI) is a vibrant research topic in the artificial intelligence community. It is attracting growing interest across methods and domains, especially those involving high-stakes decision-making, such as the biomedical sector. Much has been written about the subject, yet XAI still lacks a shared terminology and a framework capable of providing structural soundness to explanations. In our work, we address these issues by proposing a novel definition of explanation that synthesizes what can be found in the literature. We recognize that explanations are not atomic but the combination of evidence stemming from the model and its input-output mapping, and the human interpretation of this evidence. Furthermore, we characterize explanations through the properties of faithfulness (i.e., the explanation is an accurate description of the model's inner workings and decision-making process) and plausibility (i.e., how convincing the explanation seems to the user). Our theoretical framework simplifies how these properties are operationalized, and it provides new insights into common explanation methods, which we analyze as case studies. We also discuss the impact that our framework could have in biomedicine, a very sensitive application domain where XAI can play a central role in generating trust.
Source: CIBCB 2023 - IEEE Conference on Computational Intelligence in Bioinformatics and Computational Biology, Eindhoven, The Netherlands, 29-31/08/2023
An illustrative code sketch follows this record.
DOI: 10.1109/cibcb56990.2023.10264877
See at: CNR ExploRA
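
The paper's central definition — an explanation as model-derived evidence combined with a human interpretation, assessed for faithfulness and plausibility — can be rendered as a small data model. The sketch below is my own hypothetical encoding of that definition for illustration; none of the field names come from the paper.

```python
# A minimal, hypothetical data model for the paper's definition of an
# explanation: evidence extracted from the model plus its human
# interpretation, scored along the two properties the paper discusses.
from dataclasses import dataclass

@dataclass
class Explanation:
    evidence: dict          # e.g. feature attributions from the model
    interpretation: str     # the human-readable reading of that evidence
    faithfulness: float     # how accurately it reflects the model's logic
    plausibility: float     # how convincing it is to the user

exp = Explanation(
    evidence={"gene_expression_X": 0.62, "age": -0.11},
    interpretation="High expression of gene X drives the positive prediction.",
    faithfulness=0.9,
    plausibility=0.7,
)
print(exp)
```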


2023 Conference article Open Access
GAM Forest explanation
Lucchese C., Orlando S., Perego R., Veneri A.
The most accurate machine learning models unfortunately produce black-box predictions, for which it is impossible to grasp the internal logic that leads to a specific decision. Unfolding the logic of such black-box models is of increasing importance, especially when they are used in sensitive decision-making processes. In this work we focus on forests of decision trees, which may include hundreds to thousands of decision trees to produce accurate predictions. Such complexity raises the need to develop explanations for the predictions generated by large forests. We propose a post-hoc explanation method for large forests, named GAM-based Explanation of Forests (GEF), which builds a Generalized Additive Model (GAM) able to explain, both locally and globally, the impact on the predictions of a limited set of features and feature interactions. We evaluate GEF over both synthetic and real-world datasets and show that GEF can create a GAM model with high fidelity by analyzing the given forest alone, without using any further information, not even the initial training dataset.
Source: EDBT 2023 - 26th International Conference on Extending Database Technology, pp. 171–182, Ioannina, Greece, 28-31/03/2023
An illustrative code sketch follows this record.
DOI: 10.48786/edbt.2023.14
See at: ISTI Repository Open Access | openproceedings.org Open Access | CNR ExploRA
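
GEF derives its GAM by analyzing the forest structure directly, which the paper does not reduce to a few lines. The sketch below therefore shows only the generic surrogate idea — fitting an additive model to a forest's predictions by backfitting — on synthetic data I invented; it is not the authors' algorithm and, unlike GEF, it queries the forest on sample points and ignores feature interactions.

```python
# Simplified illustration of the GAM-surrogate idea: fit one shallow
# per-feature model to the forest's output by backfitting. This is a
# generic distillation sketch, NOT the GEF algorithm from the paper.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(2000, 5))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=2000)

forest = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
target = forest.predict(X)               # we only query the black box

# Backfitting: each shape function f_j is a shallow tree fit to the
# residual left by all the other shape functions.
intercept = target.mean()
shape = [DecisionTreeRegressor(max_depth=3) for _ in range(X.shape[1])]
contrib = np.zeros_like(X)
for _ in range(10):                      # a few backfitting sweeps
    for j, tree in enumerate(shape):
        residual = target - intercept - (contrib.sum(axis=1) - contrib[:, j])
        tree.fit(X[:, [j]], residual)
        contrib[:, j] = tree.predict(X[:, [j]])

gam_pred = intercept + contrib.sum(axis=1)
fidelity = np.corrcoef(gam_pred, target)[0, 1] ** 2
print(f"surrogate fidelity (R^2 vs forest): {fidelity:.3f}")
```

The per-feature shape functions can be plotted directly, which is what makes the additive surrogate interpretable both globally (the whole curve) and locally (one point on each curve).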


2022 Conference article Open Access
ILMART: interpretable ranking with constrained LambdaMART
Lucchese C., Nardini F. M., Orlando S., Perego R., Veneri A.
Interpretable Learning to Rank (LtR) is an emerging field within the research area of explainable AI, aiming at developing intelligible and accurate predictive models. While most previous research efforts focus on creating post-hoc explanations, in this paper we investigate how to train effective and intrinsically interpretable ranking models. Developing these models is particularly challenging, as it requires finding a trade-off between ranking quality and model complexity. State-of-the-art rankers, made of either large ensembles of trees or several neural layers, in fact exploit an unlimited number of feature interactions, making them black boxes. Previous approaches to intrinsically interpretable ranking models address this issue by avoiding interactions between features, thus paying a significant performance drop with respect to full-complexity models. Conversely, ILMART, our novel and interpretable LtR solution based on LambdaMART, is able to train effective and intelligible models by exploiting a limited and controlled number of pairwise feature interactions. Exhaustive and reproducible experiments conducted on three publicly available LtR datasets show that ILMART outperforms the current state-of-the-art solution for interpretable ranking by a large margin, with a gain in nDCG of up to 8%.
Source: SIGIR '22 - 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 2255–2259, Madrid, Spain, 11-15/07/2022
An illustrative code sketch follows this record.
DOI: 10.1145/3477495.3531840
See at: ISTI Repository Open Access | dl.acm.org Restricted | CNR ExploRA
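
ILMART's staged training procedure is specific to the paper, but its core ingredient — restricting which features may interact inside each tree — can be approximated with off-the-shelf LightGBM through its interaction_constraints parameter. In this hedged sketch the data, the allowed feature pairs, and all hyperparameters are arbitrary choices of mine.

```python
# Hedged sketch: limiting feature interactions in a LambdaMART-style
# ranker via LightGBM's interaction_constraints. This approximates the
# spirit of ILMART (few, controlled pairwise interactions); it is not
# the paper's staged training algorithm.
import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))
y = rng.integers(0, 5, size=1000)
groups = [100] * 10

# Each inner list names the only features allowed to co-occur in a tree:
# mostly single features (pure main effects) plus two allowed pairs.
constraints = [[0], [1], [2], [3], [4], [5], [0, 1], [2, 5]]

ranker = lgb.LGBMRanker(
    objective="lambdarank",
    n_estimators=200,
    interaction_constraints=constraints,
)
ranker.fit(X, y, group=groups)
print(ranker.predict(X[:3]))             # scores from the constrained model
```

Constraining most trees to single features yields main effects that can be plotted directly, while the few allowed pairs keep the model's interactions enumerable — the property that makes this family of rankers intelligible.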


2022 Contribution to conference Open Access
Interpretable ranking using LambdaMART (Abstract)
Lucchese C., Nardini F. M., Orlando S., Perego R., Veneri A.
Source: IIR 2022 - 12th Italian Information Retrieval Workshop 2022, Milano, Italy, 29-30/06/2022

See at: ceur-ws.org Open Access | ISTI Repository Open Access | CNR ExploRA


2023 Conference article Open Access
Can embeddings analysis explain large language model ranking?
Lucchese C., Minello G., Nardini F. M., Orlando S., Perego R., Veneri A.
Understanding the behavior of deep neural networks for Information Retrieval (IR) is crucial to improve trust in these effective models. Current popular approaches to diagnosing the predictions made by deep neural networks are mainly based on: i) the adherence of the retrieval model to some axiomatic property of the IR system, ii) the generation of free-text explanations, or iii) feature importance attributions. In this work, we propose a novel approach that analyzes the changes of document and query embeddings in the latent space and that might explain the inner workings of large pre-trained language models for IR. In particular, we focus on predicting query/document relevance, and we characterize the predictions by analyzing the topological arrangement of the embeddings in their latent space and their evolution while passing through the layers of the network. We show that there exists a link between the embedding adjustment and the predicted score, based on how tokens cluster in the embedding space. This novel approach, grounded in the interplay of query and document tokens in the latent space, provides a new perspective on neural ranker explanation and a promising strategy for improving the efficiency of the models and Query Performance Prediction (QPP).
Source: CIKM '23 - 32nd ACM International Conference on Information and Knowledge Management, pp. 4150–4154, Birmingham, UK, 21-25/10/2023
An illustrative code sketch follows this record.
DOI: 10.1145/3583780.3615225
See at: ISTI Repository Open Access | CNR ExploRA
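
As a hedged illustration of the layer-wise analysis described above, the sketch below feeds one query-document pair to a generic BERT-style cross-encoder and tracks how the centroids of query and document token embeddings approach each other across layers. The model name and the cosine-centroid measure are my assumptions, not the paper's exact protocol.

```python
# Illustrative sketch: track query vs. document token embeddings across
# the layers of a BERT-style encoder. The model and the cosine-centroid
# measure are assumptions for illustration, not the paper's method.
import torch
from transformers import AutoModel, AutoTokenizer

name = "bert-base-uncased"               # placeholder encoder
tok = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name, output_hidden_states=True)

query = "what causes tides"
doc = "Tides are caused by the gravitational pull of the moon and sun."
enc = tok(query, doc, return_tensors="pt")

with torch.no_grad():
    out = model(**enc)

# token_type_ids separates the two segments: 0 = query side, 1 = document.
is_doc = enc["token_type_ids"][0].bool()
for layer, h in enumerate(out.hidden_states):    # embeddings + each layer
    q_centroid = h[0][~is_doc].mean(dim=0)
    d_centroid = h[0][is_doc].mean(dim=0)
    sim = torch.cosine_similarity(q_centroid, d_centroid, dim=0)
    print(f"layer {layer:2d}: query/doc centroid cosine = {sim.item():.3f}")
```

Plotting this per-layer similarity for relevant versus non-relevant pairs is one simple way to probe the link the paper reports between embedding adjustment and the predicted score.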