October 14, 2019

12:00 pm – 1:00 pm

In this talk, I will mainly discuss a task-based approach to measuring how various pretraining objectives (e.g., language modeling, CCG supertagging) affect sentence encoders' understanding of function words. The tasks are created by structurally mutating sentences from existing datasets to target the comprehension of specific types of function words in English (e.g., prepositions, wh-words). Our results show that language modeling pretraining performs best on average across our probing tasks, supporting its widespread use for pretraining state-of-the-art NLP models, while CCG supertagging and NLI pretraining perform comparably. Overall, no pretraining objective dominates across the board, and our function word probing tasks highlight several intuitive differences between pretraining objectives. In addition to these findings, I will discuss ongoing follow-up work and some promising future directions for probing analyses in NLP.