2023
Conference article  Open Access

Explaining learning to rank methods to improve them

Veneri A.

Explainable Artificial Intelligence, Learning to Rank, Large Language Models, Text Ranking

State-of-the-art methods for Learning to Rank (LtR), whether designed for tabular or textual data, are incredibly complex. This increasing model complexity has many drawbacks, including difficulty in understanding the logic behind each prediction and a lack of trust in the system during deployment. This paper, which outlines the author's research goals during his Ph.D., analyzes and discusses how ideas and tools from the eXplainable Artificial Intelligence (XAI) field can make the most effective LtR methods understandable to practitioners, with the ultimate goal of making them more efficient and/or of better understanding when they can be improved. The strategies adopted to achieve these goals differ according to the type of model analyzed, ranging from traditional LtR models based on ensembles of decision trees over handcrafted features to fairly new neural LtR models operating on text data.

Source: CIKM '23 - 32nd ACM International Conference on Information and Knowledge Management, pp. 5185–5188, Birmingham, UK, 21-25/10/2023

Publisher: ACM, Association for Computing Machinery, New York, USA

BibTeX entry
@inproceedings{oai:it.cnr:prodotti:488076,
	title = {Explaining learning to rank methods to improve them},
	author = {Veneri, A.},
	booktitle = {CIKM '23: Proceedings of the 32nd ACM International Conference on Information and Knowledge Management},
	pages = {5185--5188},
	address = {Birmingham, UK},
	publisher = {Association for Computing Machinery},
	doi = {10.1145/3583780.3616002},
	year = {2023}
}