Workshop on Language Processing

Date
Oct 11, 2014, 11:30 am to Oct 12, 2014, 4:30 pm
Location
Friend Center Room 004

To attend the workshop, please RSVP to <[email protected]>


Sat Oct 11th

 11:30-12:30 -- Kathy Hirsh-Pasek (Temple University)
 Trading spaces: Where “universal” components in events meet language

 12:30-1:30 -- Lunch

 1:30-2:30 -- Uri Hasson (Princeton University)
 Coupled neural systems underlie the production and comprehension of naturalistic narrative speech

 2:45-3:45 -- Delphine Dahan (UPenn)
 Speech comprehension in real time

 4:00-5:00 -- Casey Lew-Williams (Princeton University)
 Specific referential contexts shape efficiency in first and second language processing

 5:15-6:15 -- Ted Gibson (MIT)
 Cognition as shaped by culture: number, shape-bias and color names

 6:30 -- Reception and Dinner at Palmer House

Sun Oct 12th

 9:30-9:45 -- Breakfast

 9:45-10:45 -- Anna Papafragou (University of Delaware)
 Events in language and cognition

 11:00-12:00 -- Ev Fedorenko (Massachusetts General Hospital)
 Processing structure in language and music: the contributions of specialized vs. overlapping mechanisms

 12:00-1:00 -- Lunch

 1:00-2:00 -- Kristen Syrett (Rutgers University)
 Semantic support for learning motion verbs

 2:15-3:15 -- Adele Goldberg (Princeton University)
 Explain me this: learning what not to say

 3:30-4:30 -- John Trueswell (UPenn)
 Revise and resubmit: How real-time parsing preferences influence grammar acquisition 


Abstracts (more coming soon):

Kathy Hirsh-Pasek (Temple University)
 Trading spaces: Where “universal” components in events meet language

Relational terms (e.g., verbs and prepositions) are a cornerstone of language development, bringing together research in linguistic theory and infants’ event processing. To acquire relational terms, infants must parse the dynamic flow of events into the discrete and categorical units that will be mapped onto language. This mapping is not transparent and varies across languages. In this talk, I present data on path/manner and figure/ground, along with preliminary data from the study of cause and force dynamics, to ask how infants perceive and conceptualize categorical components in dynamic events and how language interacts with this process to package these constructs. I present the thesis that infants start with language-general, nonlinguistic constructs that are gradually tuned to the requirements of their native language. This work offers an alternative view to current discussions of the Whorfian hypothesis (Gleitman & Papafragou, 2013). Language does not alter non-linguistic conceptual representations, nor is language merely on-line “thinking for speaking.” Rather, language heightens and dampens attention to particular aspects of non-verbal events.

Delphine Dahan (UPenn)
 Speech comprehension in real time

Language is a system that combines discrete units into hierarchical structures from which meaning is derived. Yet speech, its most common physical instantiation, is a temporally extended signal that provides continuously variable and ambiguous data. The interface between the gradient, continuous properties of speech and the categorical, combinatorial structure of language is a topic of inquiry with far-reaching implications for our understanding of language processing and of cognition more generally. Work undertaken in my lab on the time course of speech comprehension is shedding light on this process. In particular, while a very brief portion of speech can give rise to anticipation of upcoming words or phrases, there is growing evidence that phonetic, semantic, and combinatorial properties of current information can modulate the categorization of prior speech units. This finding suggests a temporal window over which the attribution of portions of the physical signal to elements of meaning remains flexible. Importantly, our work has also revealed limits to such flexibility.

Anna Papafragou (University of Delaware)
 Events in language and cognition

The linguistic expression of events draws on basic, probably universal, elements of perceptual/cognitive structure. Nevertheless, little is known about how event cognition maps onto language production. Furthermore, languages differ in how they segment and package events. This cross-linguistic variation raises the question of whether the language one speaks could affect the way one thinks about events. This talk addresses how event cognition interfaces with language. Our studies reveal remarkable similarities in the way events are perceived, remembered, and categorized despite differences in how events are encoded cross-linguistically. They also show that language-specific ways of encoding events affect learners’ conjectures about event predicates in their language, as well as the processes underlying speech planning cross-linguistically.

Ev Fedorenko (Massachusetts General Hospital)
 Processing structure in language and music: the contributions of specialized vs. overlapping mechanisms

Over the last several years, a number of behavioral, ERP, MEG, and fMRI studies have argued for overlap in processing linguistic and musical structure (e.g., Patel et al., 1998; Maess et al., 2001; Koelsch et al., 2002; Koelsch et al., 2005; Fedorenko et al., 2009; Slevc et al., 2009; Hoch et al., 2011; see e.g., Koelsch, 2005, Slevc, 2012, or Tillmann, 2012, for reviews).  The presence of overlap in cognitive/neural mechanisms that support syntactic processing in music and language is sometimes taken as evidence against functional specialization in each domain.  However, the presence of overlapping circuits in no way bears on whether additional specialized circuits exist (e.g., Patel, 2003).  Indeed, double-dissociations between high-level linguistic and musical processing in patients with brain damage suggest at least some degree of independence between these two domains (e.g., Luria et al., 1965; Peretz, 1993; Dalla Bella & Peretz, 1999; Peretz & Coltheart, 2003; cf. Ustvedt, 1937; Patel et al., 2008).  I will summarize recent fMRI evidence for functional specialization for linguistic and musical structural processing.  In particular, a number of brain regions in the left frontal and temporal cortices that robustly respond to the presence of structure in the linguistic signal show little or no response to musical structure (Fedorenko, Behr & Kanwisher, 2011; also Rogalsky et al., 2011).  Furthermore, several regions in the bilateral temporal cortices that respond to the presence of structure in music are not sensitive to linguistic structure (Fedorenko, McDermott, Norman-Haignere & Kanwisher, 2012).  Complementing this dissociation in fMRI, I will present some recent evidence from global aphasics (joint work with Josh McDermott and Rosemary Varley), who appear to be able to process tonal structure, suggesting that an intact linguistic system is not necessary for the processing of musical syntax.  
I will further argue that the overlap that has been observed between syntactic processing in language and music arises within a highly domain-general fronto-parietal network, the “multiple demand” network (e.g., Duncan, 2010), which includes parts of Broca’s area (Fedorenko, Duncan & Kanwisher, 2012).  I will conclude with some hypotheses about the roles of the specialized vs. domain-general regions in syntactic processing in each domain.