|dc.description.abstract||Although language usually occurs in an interactive and world-situated context (Clark, 1996), most research on language use to date has studied comprehension and production in isolation. This thesis combines research on comprehension and production, and explores the links between them. Its main focus is on the coordination of visual attention between speakers and listeners, as well as the influence this has on the language they use and the ease with which they understand it.
Experiment 1 compared participants’ eye movements during comprehension and production of similar sentences: in a syntactic priming task, they first heard a confederate describe an image using active or passive voice, and then described the same kind of picture themselves (cf. Branigan, Pickering, & Cleland, 2000). As expected, the primary influence on eye movements in both tasks was the unfolding sentence structure. In addition, eye movements during target production were affected by the structure of the prime sentence. Eye movements in comprehension were linked more loosely with speech, reflecting the ongoing integration of listeners’ interpretations with the visual context and other conceptual factors.
Experiments 2–7 established a novel paradigm to explore whether seeing where a speaker was looking during unscripted production facilitated identification of the objects they were describing in a photographic scene. Visual coordination in these studies was created artificially through an on-screen cursor that reflected the speaker’s original eye movements (cf. Brennan, Chen, Dickinson, Neider, & Zelinsky, 2007). A series of spatial and temporal manipulations of the link between cursor and speech investigated the respective influences of linguistic and visual information at different points in the comprehension process. Implications and potential future applications are discussed, as well as the relevance of this kind of visual cueing to the processing of real gaze in face-to-face interaction.||en