2020
Journal article, Open Access

RankEval: Evaluation and investigation of ranking models

Lucchese C., Muntean C. I., Nardini F. M., Perego R., Trani S.

Keywords: Evaluation, Computer Science Applications, Analysis, Learning to Rank, Software

RankEval is an open-source Python tool for the analysis and evaluation of ranking models based on ensembles of decision trees. Learning-to-Rank (LtR) approaches that generate tree ensembles are considered the most effective solution for difficult ranking tasks, and several impactful LtR libraries have been developed with the aim of improving ranking quality and training efficiency. However, these libraries offer little support for hyper-parameter tuning and in-depth analysis of the learned models, and even their implementations of the most popular Information Retrieval (IR) metrics differ, making it difficult to compare different models. RankEval overcomes these limitations by providing a unified environment in which to perform an easy, comprehensive inspection and assessment of ranking models trained with different machine learning libraries. The tool focuses on efficiency, flexibility and extensibility, and is fully interoperable with the most popular LtR libraries.
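
To make the kind of unified assessment described above concrete, the sketch below shows how a tree-ensemble model trained with an external LtR library (LightGBM in this example) might be loaded and scored with NDCG@10 in RankEval. It follows the quickstart-style API (Dataset.load, RTEnsemble, NDCG, model_performance) as commonly shown in the project's examples; the file names are hypothetical, and the exact module paths, parameter names and supported format strings should be checked against the documentation of the installed RankEval version.

from rankeval.dataset import Dataset
from rankeval.model import RTEnsemble
from rankeval.metrics import NDCG
from rankeval.analysis.effectiveness import model_performance

# Load a test split in SVMLight/LETOR format (hypothetical file name).
test_set = Dataset.load("msn.fold1.test.txt", format="svmlight", name="MSN Fold1 test")

# Import a tree-ensemble model produced by an external LtR library.
model = RTEnsemble("lightgbm.model", name="LightGBM model", format="LightGBM")

# Evaluate the model on the dataset with NDCG@10; the result is a labelled
# (model x dataset x metric) structure that can be inspected or plotted.
performance = model_performance(datasets=[test_set], models=[model], metrics=[NDCG(cutoff=10)])
print(performance)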

Source: SoftwareX (Amsterdam) 12 (2020). doi:10.1016/j.softx.2020.100614

Publisher: Elsevier B.V., Amsterdam, Netherlands


BibTeX entry
@article{oai:it.cnr:prodotti:439137,
	title = {RankEval: Evaluation and investigation of ranking models},
	author = {Lucchese C. and Muntean C. I. and Nardini F. M. and Perego R. and Trani S.},
	publisher = {Elsevier B.V., Amsterdam, Netherlands},
	doi = {10.1016/j.softx.2020.100614},
	journal = {SoftwareX (Amsterdam)},
	volume = {12},
	year = {2020}
}

Related project (OpenAIRE): BigDataGrapes (Big Data to Enable Global Disruption of the Grapevine-powered Industries)