How are human language comprehenders able to rapidly and effortlessly combine the words that they read and hear in order to construct larger representations of meaning, such as sentences? A nearly universal observation about this process is that it constructs multiple, hierarchically arranged levels of representation (e.g., phonological, syntactic, semantic), with higher levels synthesized from the elements at lower levels. Classic psycholinguistic models posit that computation traverses the hierarchy of representations in a bottom-up manner, with lower levels of analysis preceding and guiding the higher levels (e.g., perceptual processing of the input occurs before lexical access; syntactic analysis occurs before semantic interpretation). While bottom-up constraints are clearly essential to language comprehension, experimental findings have also long suggested that the brain's language processing system is a network of parallel, interactive processes, in which high-level knowledge exerts top-down influences on lower levels of processing (e.g., semantic influences on syntactic analysis). In recent years, interactive processing models have culminated in hypotheses that high-level knowledge can actually predict aspects of the upcoming linguistic input. This anticipatory processing claim has fundamental implications for how we conceptualize the time course of language comprehension and the role of top-down constraints.
I will describe data from event-related brain potential (ERP) studies in my laboratory that illuminate the parallelism and interactivity between different levels of analysis, including 1) demonstrations that semantic and discourse-level processing can dominate sentence understanding, operating in the face of direct opposition from syntactic cues, and 2) demonstrations that semantic and syntactic processing can exert powerful, predictive influences on the earliest stages of visual word recognition.
If you would like to meet with the speaker, please contact Les Sikos.