We introduce a novel, scalable method aimed at annotating potential and actual Questions Under Discussion (QUDs) in naturalistic discourse. It consists of asking naive participants first what questions a certain portion of the discourse evokes for them and subsequently which of those end up being answered as the discourse proceeds. This paper
outlines the method and the design decisions that went into it, and characterizes high-level properties of the resulting data. We highlight ways in which the data gathered via
our method could inform our understanding of QUD-driven phenomena and QUD models
themselves. We also provide access to a visualization tool for viewing the evoked questions
we gathered using this method (N=4765 from 111 crowdsourced annotators).