2017
Conference article (Restricted)

Efficient Data Structures for Massive N-Gram Datasets

Pibiri G. E., Venturini R.

Language Models, Performance, Data Compression, Elias-Fano

The efficient indexing of large and sparse N-gram datasets is crucial in several applications in Information Retrieval, Natural Language Processing and Machine Learning. Because of the stringent efficiency requirements, dealing with billions of N-grams poses the challenge of introducing a compressed representation that preserves query processing speed. In this paper we study the problem of reducing the space required by the representation of such datasets, while maintaining the capability of looking up a given N-gram within microseconds. For this purpose we describe compressed, exact and lossless data structures that achieve, at the same time, high space reductions and no time degradation with respect to state-of-the-art software packages. In particular, we present a trie data structure in which each word following a context of fixed length k, i.e., its preceding k words, is encoded as an integer whose value is proportional to the number of words that follow such a context. Since the number of words following a given context is typically very small in natural languages, we are able to lower the space of the representation to compression levels that were never achieved before. Despite the significant savings in space, we show that our technique introduces a negligible penalty at query time.
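
The key idea described in the abstract is to replace the vocabulary identifier of a word following a fixed-length context with a small local integer. The Python snippet below is a minimal illustrative sketch of this remapping only; it is not the paper's implementation, which builds a compressed trie and encodes the remapped sequences with techniques such as Elias-Fano. The function name and data layout are hypothetical.

```python
from collections import defaultdict

def remap_last_words(ngrams):
    """Illustrative sketch (not the paper's code): replace the last word of
    each N-gram with its rank among the distinct words observed after the
    same (N-1)-word context. Since few distinct words follow a context in
    natural language, these ranks are small integers that compress well."""
    followers = defaultdict(set)          # context -> distinct following words
    for gram in ngrams:
        followers[gram[:-1]].add(gram[-1])

    # Per-context map: word -> local rank (0, 1, 2, ...)
    ranks = {ctx: {w: r for r, w in enumerate(sorted(ws))}
             for ctx, ws in followers.items()}

    # Each N-gram keeps its context; only the last word becomes a tiny integer.
    return {gram: ranks[gram[:-1]][gram[-1]] for gram in ngrams}

if __name__ == "__main__":
    trigrams = [("the", "cat", "sat"), ("the", "cat", "ran"), ("a", "big", "cat")]
    print(remap_last_words(trigrams))
    # {('the', 'cat', 'sat'): 1, ('the', 'cat', 'ran'): 0, ('a', 'big', 'cat'): 0}
```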

Source: International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 615–624, Tokyo, Japan, 7–11 August 2017


BibTeX entry
@inproceedings{oai:it.cnr:prodotti:385704,
	title = {Efficient Data Structures for Massive N-Gram Datasets},
	author = {Pibiri G. E. and Venturini R.},
	doi = {10.1145/3077136.3080798},
	booktitle = {International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 615–624, Tokyo, Japan, 7-11/08/2017},
	year = {2017}
}