2021
Journal article (Open Access)

Fine-grained visual textual alignment for cross-modal retrieval using transformer encoders

Messina N., Amato G., Esuli A., Falchi F., Gennaro C., Marchand-Maillet S.

Keywords: Deep Learning, Cross-modal retrieval, Multi-modal matching, Computer vision, NLP

Despite the evolution of deep-learning-based visual-textual processing systems, precise multi-modal matching remains a challenging task. In this work, we tackle the task of cross-modal retrieval through image-sentence matching based on word-region alignments, using supervision only at the global image-sentence level. Specifically, we present a novel approach called Transformer Encoder Reasoning and Alignment Network (TERAN). TERAN enforces a fine-grained match between the underlying components of images and sentences, i.e., image regions and words respectively, in order to preserve the informative richness of both modalities. TERAN obtains state-of-the-art results on the image retrieval task on both the MS-COCO and Flickr30k datasets. Moreover, on MS-COCO, it also outperforms current approaches on the sentence retrieval task. Focusing on scalable cross-modal information retrieval, TERAN is designed to keep the visual and textual data pipelines well separated: cross-attention links would prevent the separate extraction of the visual and textual features needed for the online search and offline indexing steps of large-scale retrieval systems. In this respect, TERAN merges the information from the two domains only during the final alignment phase, immediately before the loss computation. We argue that the fine-grained alignments produced by TERAN pave the way towards effective and efficient methods for large-scale cross-modal information retrieval. We compare the effectiveness of our approach against relevant state-of-the-art methods. On the MS-COCO 1K test set, we obtain improvements of 5.7% and 3.5% on the Recall@1 metric for the image and sentence retrieval tasks, respectively. The code used for the experiments is publicly available on GitHub at https://github.com/mesnico/TERAN.
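The abstract describes scoring an image-sentence pair by pooling fine-grained region-word similarities computed from two independent pipelines. Below is a minimal NumPy sketch of one common pooling scheme for this setup (for each word, take its best-matching region, then average over words); the function name, toy embeddings, and exact pooling choice are illustrative assumptions, not the paper's exact implementation — see the linked GitHub repository for the authors' code.

```python
import numpy as np

def alignment_score(regions, words):
    """Global image-sentence similarity from fine-grained alignments.

    regions: (n_regions, d) array of image-region embeddings
    words:   (n_words, d) array of word embeddings
    Returns a scalar: for each word, the cosine similarity of its
    best-matching region (max over regions), averaged over words.
    Pooling scheme is an illustrative assumption, not TERAN's exact one.
    """
    # L2-normalize so dot products are cosine similarities
    r = regions / np.linalg.norm(regions, axis=1, keepdims=True)
    w = words / np.linalg.norm(words, axis=1, keepdims=True)
    sim = r @ w.T                     # (n_regions, n_words) alignment matrix
    return sim.max(axis=0).mean()     # best region per word, then average

# Toy usage: two regions and three words in a shared 4-d space;
# two words align perfectly with a region, one has no match.
regions = np.array([[1.0, 0.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0, 0.0]])
words = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 0.0]])
print(alignment_score(regions, words))
```

Because the alignment matrix is computed only at the very end, region and word embeddings can be produced and indexed offline by separate encoders, which is the scalability property the abstract emphasizes.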

Source: ACM Transactions on Multimedia Computing, Communications, and Applications 17 (2021). doi:10.1145/3451390

Publisher: Association for Computing Machinery, New York, N.Y., United States of America


BibTeX entry
@article{oai:it.cnr:prodotti:457546,
	title = {Fine-grained visual textual alignment for cross-modal retrieval using transformer encoders},
	author = {Messina N. and Amato G. and Esuli A. and Falchi F. and Gennaro C. and Marchand-Maillet S.},
	publisher = {Association for Computing Machinery, New York, N.Y., United States of America},
	doi = {10.1145/3451390},
	journal = {ACM Transactions on Multimedia Computing, Communications, and Applications},
	volume = {17},
	year = {2021}
}
Bibliographic record: CNR ExploRA

Postprint version (Open Access): ISTI Repository

Also available from: dl.acm.org (restricted access)
