Lucchese C, Nardini FM, Pasumarthi RK, Bruch S, Bendersky M, Wang X, Oosterhuis H, Jagerman R, de Rijke M
Deep learning, Efficiency/effectiveness trade-off, Unbiased learning, Learning to rank
This tutorial aims to weave together diverse strands of modern Learning to Rank (LtR) research and to present them in a unified full-day format. First, we will introduce the fundamentals of LtR and give an overview of its various sub-fields. Then, we will discuss recent advances in gradient boosting methods such as LambdaMART, focusing on their efficiency/effectiveness trade-offs and optimizations. Subsequently, we will present TF-Ranking, a new open-source TensorFlow package for neural LtR models, and show how it can be used to model sparse textual features. Finally, we will conclude the tutorial by covering unbiased LtR, a new research field that aims to learn from biased implicit user feedback. The tutorial will consist of three two-hour sessions, each focusing on one of the topics described above. It will provide a mix of theoretical and hands-on sessions, and should benefit both academics interested in learning about the current state of the art in LtR and practitioners who want to apply LtR techniques in their applications.
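The final session covers unbiased LtR, where models are trained on logged clicks rather than editorial relevance labels. A common building block in that literature is inverse propensity scoring (IPS), which reweights each click by the inverse of the probability that its position was examined, correcting (in expectation) for position bias. The NumPy sketch below is our own minimal illustration of an IPS-corrected, DCG-style objective; the function name ips_weighted_dcg_loss, the toy scores, clicks, and propensity values are all assumptions for demonstration, not code from the tutorial materials.

```python
import numpy as np

def ips_weighted_dcg_loss(scores, clicks, propensities):
    """Illustrative sketch (not from the tutorial): IPS-corrected DCG estimate.

    All arguments are 1-D arrays over the documents of a single query:
    model scores, observed clicks (0/1), and the examination propensity
    of the logged position each click was observed at.
    """
    # Rank documents by the model's current scores (descending).
    order = np.argsort(-scores)
    ranks = np.empty_like(order)
    ranks[order] = np.arange(1, len(scores) + 1)

    # Standard DCG discount at each document's induced rank.
    discounts = 1.0 / np.log2(ranks + 1)

    # IPS correction: a click observed with examination probability p
    # is counted as 1/p relevant impressions.
    ips_labels = clicks / propensities

    # Return negative utility so that lower is better for a minimizer.
    return -np.sum(ips_labels * discounts)

# Toy example: 4 documents, clicks logged under a position-biased policy.
scores = np.array([2.1, 0.3, 1.5, -0.7])       # current model scores
clicks = np.array([1.0, 0.0, 1.0, 0.0])        # observed clicks
propensities = np.array([0.9, 0.6, 0.4, 0.2])  # P(examined) per logged rank

print(ips_weighted_dcg_loss(scores, clicks, propensities))
```

Under the usual examination assumption, the 1/propensity weights make this estimate unbiased with respect to the objective that would be computed from true relevance labels, which is the core argument behind counterfactual LtR.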
@inproceedings{oai:it.cnr:prodotti:415713,
  title     = {Learning to Rank in Theory and Practice: From Gradient Boosting to Neural Networks and Unbiased Learning},
  author    = {Lucchese, C. and Nardini, F. M. and Pasumarthi, R. K. and Bruch, S. and Bendersky, M. and Wang, X. and Oosterhuis, H. and Jagerman, R. and de Rijke, M.},
  booktitle = {Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '19)},
  doi       = {10.1145/3331184.3334824},
  year      = {2019}
}