Tag Archives: Psycholinguistics

Welcome, Roman Feiman!

The Department of Cognitive, Linguistic and Psychological Sciences (CLPS) is delighted to welcome our new psycholinguist Roman Feiman, who joins us in September 2018 as an Assistant Professor. Roman received his PhD in Psychology from Harvard University in 2015. He was a postdoctoral fellow at Harvard for a year, and at UC San Diego for another two. His work draws on a variety of approaches and methods from cognitive developmental psychology, language acquisition, psycholinguistics, and formal semantics. Now at Brown, he directs the brand-new Brown Language and Thought Lab. You can find the lab here: https://sites.brown.edu/bltlab/

Over the next few years Roman will be teaching, among other things, a course on language processing (CLPS 1800), a course on children's acquisition of syntax, semantics, and pragmatics (CLPS 1660), and a seminar on Logic in Language and Thought, and he will co-teach a course on Machine and Human Learning with Ellie Pavlick. Stay tuned for other courses. Welcome, Roman!

Colloquium (11/1/2012): Terry Au (University of Hong Kong)

Access to Childhood Language Memory

All adults seem to have amnesia for much of what happened in their childhood. Does early memory simply wither away through massive synaptic pruning and cell death in early brain development? Or is it just masked by interference from later experience? This talk explores these questions in the specific case of childhood language memory. Research into the re-learning of long-disused childhood languages turns out to have much to offer. It provides relatively objective evidence for access to early childhood memory in adulthood via re-learning. It complements linguistic deprivation research in highlighting the special status of childhood language experience in phonology and morphosyntax acquisition. It thereby suggests a strategy for salvaging seemingly forgotten childhood languages, which are often also heritage languages. Equally importantly, re-learning childhood languages may well open a window onto how language affects cognitive development not only during, but also well beyond, the childhood years.

LingLang Lunch (11/14/2012): Brian Dillon (University of Massachusetts Amherst)

Syntactic complexity across the at-issue / not-at-issue divide

Much work in psycholinguistics has been dedicated to uncovering the source of complexity effects in syntactic processing (Chomsky & Miller, 1963; Gibson, 1998; Levy, 2007; Lewis, 1996; Lewis & Vasishth, 2005; Yngve, 1960; i.a.). There are many theoretical accounts of syntactic complexity effects, from Chomsky and Miller’s (1963) observations on the difficulty of self-embedding to the cost of introducing new discourse referents while simultaneously maintaining syntactic predictions (Gibson, 1998), among many others. One recent and influential model attempts to reduce syntactic complexity to interference effects related to memory retrieval (Lewis & Vasishth, 2005). In this talk I present joint work with Lyn Frazier and Chuck Clifton that investigates the source of syntactic complexity by looking at how the at-issue / not-at-issue distinction relates to syntactic complexity effects. Not-at-issue content such as appositives and parentheticals does not directly contribute to the truth conditions of a sentence, and has therefore been argued to form a separate ‘dimension’ of meaning (Potts, 2005). A series of judgment experiments shows that syntactic complexity in the not-at-issue dimension does not lead to complexity effects in offline judgments, while complexity in at-issue content does. I then present eye-tracking data that helps to locate the source of the complexity effects in online comprehension. The results provide initial evidence that (i) the parser distinguishes at-issue and not-at-issue content, and (ii) the complexity effects observed in the present data cannot be reduced to retrieval interference. I suggest that the at-issue / not-at-issue distinction is used to structure parsing routines by maintaining distinct stacks for different types of linguistic content, thereby minimizing complexity for the sentence as a whole.
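To make the final suggestion concrete, here is a minimal, purely illustrative Python sketch (an assumption for exposition, not anything from Dillon, Frazier, and Clifton's work) of a parser state that keeps separate stacks for at-issue and not-at-issue material, so that embedding inside an appositive or parenthetical does not add to the depth of the at-issue stack:

class TwoStackParserState:
    """Toy parser state with one stack per meaning dimension (illustrative only)."""

    def __init__(self):
        self.stacks = {"at_issue": [], "not_at_issue": []}

    def open_constituent(self, label, dimension="at_issue"):
        # Push an open constituent onto the stack for its dimension.
        self.stacks[dimension].append(label)

    def close_constituent(self, dimension="at_issue"):
        # Pop the most recently opened constituent in that dimension.
        return self.stacks[dimension].pop()

    def complexity(self):
        # Index complexity by the deepest single stack rather than the total,
        # so not-at-issue embedding does not burden the at-issue dimension.
        return max(len(stack) for stack in self.stacks.values())

state = TwoStackParserState()
state.open_constituent("S")                              # main clause
state.open_constituent("AppositiveRC", "not_at_issue")   # parked on the other stack
state.open_constituent("EmbeddedS", "not_at_issue")
print(state.complexity())   # 2, not 3: the at-issue stack stays shallow

On this toy picture, complexity tracks the deepest individual stack, which is one simple way the distinction could minimize complexity for the sentence as a whole.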

LingLang Lunch (9/18/2013): Eva Wittenberg (Tufts University)

Close but no cigar: The differences between kissing, giving kisses, and giving other things

Light verb constructions, such as “Julius is giving Ellie a kiss”, create a mismatch at the syntax-semantics interface. Typically, each argument in a sentence corresponds to one semantic role, such as in “Julius is giving Ellie a present”, where Julius is the Source, Ellie the Goal, and the present the Theme. However, a light verb construction such as “Julius is giving Ellie a kiss” with three arguments describes the same event as the transitive “Julius kissed Ellie” with two arguments: Julius is the Agent, and Ellie the Patient.
This leads to several questions: First, how are light verb constructions such as “giving a kiss” processed differently from sentences such as “giving a present”? Second, at which structural level of representation would we find sources of this difference? Third, what is the effect of using a light verb construction such as “giving a kiss”, as opposed to “kissing”, on the event representation created in a listener? I will present data from an ERP study, an eye-tracking study, and several behavioral studies to answer these questions.

LingLang Lunch (10/2/2013): Josh Hartshorne (MIT)

Syntax, Semantics, World Knowledge, and Reference

Consider these examples from Winograd (1972):

(1) The city council denied the protesters a permit because they feared violence.
(2) The city council denied the protesters a permit because they advocated violence.

Most people reliably attribute different interpretations to “they” in (1) and (2), though in principle the pronoun could in each case refer to the city council, the protesters, or someone else. Levesque (2012) has argued that solving such sentences draws on such a wide range of cognitive abilities that it is an even stronger test of human intelligence than the original Turing Test.

Psycholinguists, too, have been interested in ambiguous pronouns. In 1974, Garvey and Caramazza demonstrated that people have strong expectations about the meanings of pronouns even without having heard the potentially critical end of the sentence:

(3) The city council denied the protesters a permit because they…
(4) Sally frightened Mary because she…
(5) Alfred liked Bernard because he…

These intuitions can be modified by such a bewildering range of contextual manipulations that here, too, many commentators resorted to attributing pronoun reference to inference over ill-specified concepts such as “event structure” (Pickering & Majid, 2007) or “salience” (Song & Fisher, 2004).

In this talk, I concede that pronoun reference is very difficult and that, in the limit, it requires a broad swath of cognition; nonetheless, we are already in a position to say quite a lot about it. Much of the complexity of the phenomena reduces to the interactions of a small number of abstract structures in semantics and discourse. I demonstrate this with a combination of experiments and computational modeling.

LingLang Lunch (1/29/2014): Anna Shusterman (Wesleyan University)

Language-Thought Interactions in Development

How do language and thought influence each other during development? Drawing on the cases of spatial and numerical cognition, I will discuss recent work from my lab exploring this question. For both cases, I will show evidence of interesting language-thought correspondences that raise questions about the mechanisms through which language and cognition become linked. In the case of space, I will focus on three studies exploring the hypothesis that acquiring frame-of-reference terms (left-right, north-south) causally affects spatial representation in three different populations: English-speaking preschoolers, two cohorts of Nicaraguan Sign Language users, and Kichwa-speaking adults outside of Quito, Ecuador (Kichwa is a dialect of Quechua spoken in Ecuador). In the case of number, I will focus on emerging evidence that numerical acuity (in the analog magnitude system) and the acquisition of counting knowledge are correlated even in preschoolers. These studies suggest that language acquisition is deeply tied to the development of non-verbal conceptual systems for representing space and number, raising new questions and hypotheses about the roots of this relationship.

LingLang Lunch (2/19/2014): Nathaniel Smith (University of Edinburgh)

Building a Bayesian bridge between the physics and the phenomenology of social interaction

What is word meaning, and where does it live? Both naive intuition and scientific theories in fields such as discourse analysis and socio- and cognitive linguistics place word meanings, at least in part, outside the head: in important ways, they are properties of speech communities rather than individual speakers. Yet, from a neuroscientific perspective, we know that actual speakers and listeners have no access to such consensus meanings: the physical processes that generate word tokens in usage can only depend directly on the idiosyncratic goals, history, and mental state of a single individual. It is not clear how these perspectives can be reconciled. This gulf is thrown into sharp relief by current Bayesian models of language processing: models of learning have taken the former perspective, while models of pragmatic inference and implicature have taken the latter. As a result, these two families of models, though built using the same mathematical framework and often by the same people, turn out to contain formally incompatible assumptions.

Here, I’ll present the first Bayesian model that can simultaneously learn word meanings and perform pragmatic inference. In addition to capturing standard phenomena in both of these literatures, it gives insight into how the literal meaning of words like “some” can be acquired from observations of pragmatically strengthened uses, and provides a theory of how novel, task-appropriate linguistic conventions arise and persist within a single dialogue, as in the well-known phenomenon of lexical alignment. Over longer time scales, such effects should accumulate to produce language change; however, unlike traditional iterated learning models, our simulated agents do not converge on a sample from their prior, but instead show an emergent bias towards belief in more useful lexicons. Our model also makes the interesting prediction that different classes of implicature should be differentially likely to conventionalize over time. Finally, I’ll argue that the mathematical “trick” needed to convince word learning and pragmatics to work together in the same model in fact captures a real truth about the psychological mechanisms needed to support human culture, and, more speculatively, suggest that it may point the way towards a general mechanism for reconciling qualitative, externalist theories of social interaction with quantitative, internalist models of low-level perception and action, while preserving the key claims of both approaches.
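As background for readers unfamiliar with these models, here is a minimal Python sketch of the kind of recursive Bayesian ("rational speech act"-style) pragmatic inference that such models build on, using a hypothetical two-utterance lexicon; it illustrates the general setup, not the specific model presented in the talk:

import numpy as np

# Hypothetical toy lexicon: rows = utterances, columns = world states.
# "some" is literally true of both states; "all" only of the ALL state.
utterances = ["some", "all"]
states = ["SOME_BUT_NOT_ALL", "ALL"]
lexicon = np.array([[1.0, 1.0],   # "some"
                    [0.0, 1.0]])  # "all"

def literal_listener(lex):
    # P(state | utterance): literal truth with a uniform prior over states.
    return lex / lex.sum(axis=1, keepdims=True)

def pragmatic_speaker(lex, alpha=4.0):
    # P(utterance | state): soft-max over how informative each utterance is.
    with np.errstate(divide="ignore"):
        utility = np.log(literal_listener(lex))
    scores = np.exp(alpha * utility)
    return scores / scores.sum(axis=0, keepdims=True)  # normalize over utterances

def pragmatic_listener(lex):
    # P(state | utterance): Bayesian inversion of the pragmatic speaker.
    s1 = pragmatic_speaker(lex)
    return s1 / s1.sum(axis=1, keepdims=True)

print(pragmatic_listener(lexicon)[0])
# Hearing "some", the pragmatic listener shifts belief toward SOME_BUT_NOT_ALL:
# the scalar implicature emerges from the recursion, even though "some" is
# literally compatible with ALL.

In a model that also learns, the lexicon matrix itself would not be stipulated but inferred from observed, pragmatically strengthened uses, which is where word learning and pragmatic inference have to be made to work together.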

LingLang Lunch (2/26/2014): Jeff Runner (University of Rochester)

Binding constraints on processing: pronouns are harder than reflexives (In collaboration with Kellan Head, Teach For America, and Kim Morse, University of Rochester)

In this talk I will present the results of a visual world eye-tracking experiment designed to test two claims in the literature: (a) that the binding theory is a set of “linked” constraints, as in the classic binding theory (Chomsky 1981) and HPSG’s binding theory (Sag, Wasow & Bender 2003); and (b) that the binding theory applies as an initial filter on processing (Nicol & Swinney 1989, Sturt 2003). Our results instead support two different claims: (a) that the constraint(s) on pronouns and the constraint(s) on reflexives are separate constraints that apply differently and on different timelines, in line with the “primitives of binding” theory (Reuland 2001, 2011); and (b) that neither constraint applies as an initial filter on processing, as in Badecker & Straub (2001). In particular, the results show clearly that resolution of the appropriate antecedent is delayed for pronouns relative to reflexives. This project started as an examination of the on-line effects of the constraints of the binding theory, developing an approach based on Nicol & Swinney (1989), Badecker & Straub (2001), and Sturt (2003). Recent work, however, implicates the critical role of memory access in reflexive interpretation (Dillon et al. 2013). Thus, I will also try to relate our results to current models of memory access.

LingLang Lunch (3/19/2014): Shiri Lev-Ari (Max Planck Institute for Psycholinguistics, Nijmegen)

How expectations influence language processing, and their cognitive and social consequences: The case of processing the language of non-native speakers

Non-native language is less reliable in conveying speakers’ intentions, and listeners know and expect that. I propose that these expectations of lower competence lead listeners to adjust their processing when listening to non-native speakers: they rely more on top-down processes and make do with less detailed processing of the language, but only if they have the cognitive resources to do so. I will first show evidence supporting these claims, and then show that the adjustment to non-native speakers temporarily alters the way all language, including one’s own, is processed. I will end by showing one of the social consequences of this adjustment in processing: better perspective-taking when listening to non-native speakers.

LingLang Lunch (4/2/2014): Sheila Blumstein (Brown University)

Variability and Invariance in Speech and Lexical Processing: Evidence from Aphasia and Functional Neuroimaging

The processes underlying both speaking and understanding appear to be easy and seamless. And yet speech input is highly variable, the lexical form of a word shares its sound shape with many other words in the lexicon, and often a given word has multiple meanings. The goal of this research is to examine how the neural system is, on the one hand, sensitive to the variability in the speech and lexical processing system and, on the other, able to resolve this variability. To this end, we will review recent research investigating how the perceptual system resolves variability in selecting the appropriate word from its competitors and in determining which category a sound belongs to, e.g. [d] or [t], and how different acoustic features of sounds, e.g. [d-t] vs. [s-z], map onto a common abstract feature, e.g. voicing. We will then examine how higher-level information sources such as semantic and conceptual information are used in perceiving degraded speech. The implications of these findings will be considered for models of the functional and neural architecture of language.