A5: Distributing Referential Information across Modalities
Project A5 directly investigates the hypothesis that, in situated comprehension, information may be distributed across several modalities. The project takes as its starting point robust experimental evidence that both the visual referential context and visual signals generated by the speaker, such as gaze, influence the listener's expectations about what is likely to be mentioned. The planned research thus seeks to extend current language-centric notions of surprisal, both by quantifying the effect of visual context and visual signals on surprisal and by evaluating the hypothesis that non-linguistic signals can be viewed as a means of distributing information more uniformly in situated communication. The results of this project will also be relevant to the visual-world studies planned in A1, A3, and A4.
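Surprisal is standardly defined as the negative log probability of a word given its context. The extension described above amounts to conditioning that probability on visual information as well as the linguistic prefix. A minimal sketch, using made-up probabilities purely for illustration (not data from the project):

```python
import math

def surprisal(p: float) -> float:
    """Surprisal of an event with probability p, in bits: -log2(p)."""
    return -math.log2(p)

# Hypothetical next-word probabilities, for illustration only.
p_linguistic_only = 0.10   # P(word | linguistic context alone)
p_with_visual = 0.40       # P(word | linguistic context + visual context/gaze)

# Conditioning on visual information raises the word's probability and
# therefore lowers its surprisal -- the effect the project aims to quantify.
assert surprisal(p_with_visual) < surprisal(p_linguistic_only)
```

On this view, a gaze cue that makes an upcoming referent more predictable carries part of the referential information, smoothing the surprisal profile of the utterance.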
Publications

29th Annual CUNY Conference on Human Sentence Processing (CUNY), Gainesville, FL, 2016.

Language Processing: Cognitive Load with(out) Visual Context. In: Proceedings of the 22nd Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP), Bilbao, Spain, 2016.

Proceedings of the 9th Embodied and Situated Language Processing Conference (ESLP), Pucón, Chile, 2016.

The influence of visual context on predictions in sentence processing: Evidence from ICA. In: Proceedings of the Language and Perception International Conference, Trondheim, Norway, 2016.

Cognitive load in the visual world: The facilitatory effect of gaze. In: Gunzelmann, G.; Howes, A.; Tenbrink, T.; Davelaar, E. J. (Eds.): Proceedings of the 39th Annual Meeting of the Cognitive Science Society, London, UK, 2017.