[1] Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum Learning. In ICML. 41-48.
[2] Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching Word Vectors with Subword Information. TACL 5 (2017), 135-146.
[3] Xinlei Chen and Abhinav Gupta. 2015. Weakly Supervised Learning of Convolutional Networks. In ICCV. 1431-1439.
[4] Thomas F. Coleman and Zhijun Wu. 1996. Parallel Continuation-based Global Optimization for Molecular Conformation and Protein Folding. Journal of Global Optimization 8, 1 (January 1996), 49-65.
[5] Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural Language Processing (Almost) from Scratch. The Journal of Machine Learning Research 12 (August 2011), 2493-2537.
[6] Nick Craswell, Bhaskar Mitra, and Daniel Campos. 2019. Overview of the TREC 2019 Deep Learning Track. In TREC.
[7] Zhuyun Dai, Chenyan Xiong, Jamie Callan, and Zhiyuan Liu. 2018. Convolutional Neural Networks for Soft-Matching N-Grams in Ad-hoc Search. In WSDM. 126-134.
[8] Mostafa Dehghani, Arash Mehrjou, Stephan Gouws, Jaap Kamps, and Bernhard Schölkopf. 2017. Fidelity-Weighted Learning. In ICLR.
[9] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In NAACL.
[10] Laura Dietz, Ben Gamari, Jeff Dalton, and Nick Craswell. 2017. TREC Complex Answer Retrieval Overview. In TREC.
[11] Nicola Ferro, Claudio Lucchese, Maria Maistro, and Raffaele Perego. 2018. Continuation Methods and Curriculum Learning for Learning to Rank. In CIKM. 1523-1526.
[12] Jiafeng Guo, Yixing Fan, Qingyao Ai, and W. Bruce Croft. 2016. A Deep Relevance Matching Model for Ad-hoc Retrieval. In CIKM. 55-64.
[13] Helia Hashemi, Mohammad Aliannejadi, Hamed Zamani, and W. Bruce Croft. 2019. ANTIQUE: A Non-Factoid Question Answering Benchmark. arXiv:1905.08957 (2019).
[14] Baotian Hu, Zhengdong Lu, Hang Li, and Qingcai Chen. 2014. Convolutional Neural Network Architectures for Matching Natural Language Sentences. In NIPS.
[16] Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry P. Heck. 2013. Learning Deep Structured Semantic Models for Web Search Using Clickthrough Data. In CIKM.
[17] Kai Hui, Andrew Yates, Klaus Berberich, and Gerard de Melo. 2018. Co-PACRR: A Context-Aware Neural IR Model for Ad-hoc Retrieval. In WSDM. ACM, 279-287.
[18] Shiyu Ji, Jinjin Shao, and Tao Yang. 2019. Efficient Interaction-based Neural Ranking with Locality Sensitive Hashing. In WWW.
[19] Lu Jiang, Deyu Meng, Shoou-I Yu, Zhen-Zhong Lan, Shiguang Shan, and Alexander G. Hauptmann. 2014. Self-Paced Learning with Diversity. In NIPS.
[20] Lu Jiang, Deyu Meng, Qian Zhao, Shiguang Shan, and Alexander G. Hauptmann. 2015. Self-Paced Curriculum Learning. In AAAI.
[21] Jimmy Lin. 2018. The Neural Hype and Comparisons Against Weak Baselines. SIGIR Forum 52 (2018), 40-51.
[22] Jimmy Lin and Peilin Yang. 2019. The Impact of Score Ties on Repeatability in Document Ranking. In SIGIR.
[23] Sean MacAvaney, Andrew Yates, Arman Cohan, and Nazli Goharian. 2019. CEDR: Contextualized Embeddings for Document Ranking. In SIGIR.
[24] Sean MacAvaney, Andrew Yates, Arman Cohan, Luca Soldaini, Kai Hui, Nazli Goharian, and Ophir Frieder. 2018. Overcoming Low-Utility Facets for Complex Answer Retrieval. Information Retrieval Journal (2018).
[25] Donald Metzler and W. Bruce Croft. 2005. A Markov Random Field Model for Term Dependencies. In SIGIR.
[26] Bhaskar Mitra and Nick Craswell. 2017. Neural Models for Information Retrieval. arXiv:1705.01509 (2017).
[27] Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A Human Generated MAchine Reading COmprehension Dataset. In CoCo@NIPS.
[28] Rodrigo Nogueira and Kyunghyun Cho. 2019. Passage Re-ranking with BERT. arXiv:1901.04085 (2019).
[29] Rodrigo Nogueira, Wei Yang, Jimmy Lin, and Kyunghyun Cho. 2019. Document Expansion by Query Prediction. arXiv:1904.08375 (2019).
[30] Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, and Xueqi Cheng. 2016. A Study of MatchPyramid Models on Ad-hoc Retrieval. In NeuIR @ SIGIR.
[31] Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Jingfang Xu, and Xueqi Cheng. 2017. DeepRank: A New Deep Architecture for Relevance Ranking in Information Retrieval. In CIKM. 257-266.
[32] Gustavo Penha and Claudia Hauff. 2020. Curriculum Learning Strategies for IR: An Empirical Study on Conversation Response Ranking. In ECIR.
[33] Meng Qu, Jian Tang, and Jiawei Han. 2018. Curriculum Learning for Heterogeneous Star Network Embedding via Deep Reinforcement Learning. In WSDM. 468-476.
[34] Mrinmaya Sachan and Eric P. Xing. 2016. Easy Questions First? A Case Study on Curriculum Learning for Question Answering. In ACL.
[35] David W Scott. 2015. Multivariate density estimation: theory, practice, and visualization. John Wiley & Sons.
[36] Yelong Shen, Xiaodong He, Jianfeng Gao, Li Deng, and Grégoire Mesnil. 2014. Learning Semantic Representations Using Convolutional Neural Networks for Web Search. In WWW.
[37] Chenyan Xiong, Zhuyun Dai, Jamie Callan, Zhiyuan Liu, and Russell Power. 2017. End-to-End Neural Ad-hoc Ranking with Kernel Pooling. In SIGIR. 55-64.
[38] Peilin Yang, Hui Fang, and Jimmy Lin. 2018. Anserini: Reproducible Ranking Baselines Using Lucene. J. Data and Information Quality 10 (2018), 16:1-16:20.
[39] Wei Yang, Kuang Lu, Peilin Yang, and Jimmy Lin. 2019. Critically Examining the "Neural Hype": Weak Baselines and the Additivity of Effectiveness Gains from Neural Ranking Models. In SIGIR.