Not all arguments are processed equally: a distributional model of argument complexity
Abstract
This work addresses two questions about language processing: what does it mean for a natural language sentence to be semantically complex? And which semantic features determine different degrees of difficulty for human comprehenders? Our goal is to introduce a framework for argument semantic complexity in which processing difficulty depends on the typicality of the arguments in a sentence, that is, on their degree of compatibility with the selectional constraints of the predicate. We postulate that complexity depends on the difficulty of building a semantic representation of the event or situation conveyed by a sentence. This representation can either be retrieved directly from semantic memory or built dynamically by solving the constraints encoded in the stored representations. To support this hypothesis, we built a Distributional Semantic Model to compute a compositional cost function for the sentence unification process. Our evaluation on psycholinguistic datasets shows that the model accounts for semantic phenomena such as the context-sensitive update of argument expectations and the processing of logical metonymies.
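To give an intuition of the general idea, the sketch below illustrates one simple way a typicality-based cost could be computed with distributional vectors: an argument is scored by its cosine similarity to a prototype built from typical fillers of the predicate's role, and cost grows as typicality drops. This is only a minimal, assumption-laden illustration, not the authors' actual model or cost function; the vectors and the names (`word_vectors`, `typical_fillers`) are hypothetical placeholders.

```python
# Illustrative sketch only: NOT the paper's model. It shows the general idea
# of scoring argument typicality with distributional vectors and turning it
# into a processing cost.
import numpy as np


def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))


def typicality(arg_vec: np.ndarray, filler_vecs: list[np.ndarray]) -> float:
    """Typicality of an argument = similarity to the centroid (prototype)
    of vectors for typical fillers of the predicate's role."""
    prototype = np.mean(filler_vecs, axis=0)
    return cosine(arg_vec, prototype)


def semantic_cost(arg_vec: np.ndarray, filler_vecs: list[np.ndarray]) -> float:
    """One possible cost function: higher cost for less typical arguments."""
    return 1.0 - typicality(arg_vec, filler_vecs)


# Placeholder vectors (in practice these would come from a distributional
# model trained on corpus co-occurrences, not random numbers).
rng = np.random.default_rng(0)
word_vectors = {w: rng.normal(size=50) for w in ["beer", "book"]}
typical_fillers = [word_vectors["beer"]]  # e.g. typical objects of "drink"

print(semantic_cost(word_vectors["beer"], typical_fillers))  # low cost
print(semantic_cost(word_vectors["book"], typical_fillers))  # higher cost
```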