Thursday, July 13, 2006

Content vs. Identity in language comprehension

Whether a statement makes sense depends on context, including who is making it. An article in ScienceNOW reports on a paper presented at the 5th Forum of European Neurosciences, where the presenters tested the assumption that the brain processes what has been said first, and only afterward, in serial fashion, the information about who said it.


Using EEG, the researchers found a spike 200 to 300 ms after the onset of a word that was unexpected given a particular speaker. The example used in the story was "I wish I looked like Britney Spears," where "Britney" was unexpected coming from a male speaker but not from a female one. That sentence itself raises interesting questions about social expectation and sexual orientation, but those aside, the more general finding is that people do not comprehend language in a vacuum.

Here is the conference abstract:

First author: Van Berkum, Jos (speaker)

Symposium S05-3 (session 031). Sun 09/07, 09:45 - Hall F1
Language from a brain perspective
Abstract A031.3, published in FENS Forum Abstracts, vol. 3, 2006.
Ref.: FENS Abstr., vol.3, A031.3, 2006

Author(s) Van Berkum J. J. A. (1, 2)
Address(es) (1) Fac. Social & Behav. Sci., Psychonomics, Univ.
Amsterdam, Amsterdam, The Netherlands; (2) FC Donders centre for
Cognitive Neuroimaging, Nijmegen, The Netherlands
Title Discourse and the brain.
Text We've known for some time that electroencephalography can
provide valuable information about the nature of the human language
comprehension system. However, because using brain measures is
difficult enough as it is, most language researchers have until
recently limited their use of EEG to the comprehension of
decontextualized single sentences. In my talk, I will review what we
have learned from our initial attempts to study comprehension of
sentences in a wider discourse (e. g. piece of text) by means of
event-related brain potentials. One thing we observe time and again is
that listeners and readers extremely rapidly relate the words of an
unfolding sentence to what the wider discourse is about. For example,
if the meaning of a spoken word does not fit this wider context, the
processing consequences show up in the EEG at some 150 milliseconds
after the word's acoustic onset, and well before the word has been
fully pronounced. Listeners and readers also very rapidly determine
who is being referred to by expressions such as "the girl" or "he",
sometimes within some 250-300 milliseconds. In addition, our work
reveals that listeners and readers go beyond such rapid reactive
word-by-word processing, and can actually use their knowledge of the
wider discourse proactively, to predict specific upcoming words in
real time as the sentence unfolds. Finally, the evidence suggests that
listeners immediately work out to what extent the unfolding message
fits with what they know about the speaker (or can infer from his or
her voice). The moral of all this is that even though discourse-level
computations are complex, 'high-level', and essentially open-ended,
the human brain can use them to analyze and extrapolate linguistic
input within a split second. I will argue that to keep track of the
various processes involved, the time-resolved multidimensional human
EEG is a particularly useful source of information (see also Van
Berkum, 2004, downloadable from here).
