Tag Archives: Psycholinguistics

LingLang Lunch (4/16/2014): Lee Edelist (Brown University)

Pronoun resolution in multi-utterance discourse

This talk will look at several pragmatic theories of pronoun resolution (Centering, Accessibility theory, and Coherence-based theory), identify how they complement one another, and point out what remains unaccounted for. Notably, all of these theories focus on examples consisting of one or two consecutive utterances. Natural discourse, though, is not limited to two-utterance stints. I’ll show that when co-reference is carried across multiple utterances, the rules differ somewhat from what has been described thus far. Finally, I’ll discuss a proposed study that will observe readers’ judgments of the identity of pronoun referents in conditions where the theories make conflicting predictions.

LingLang Lunch (10/1/2014): Sara Guediche (Brown University)

Flexible and adaptive processes in speech perception
The perception of speech depends on mapping a highly variable and complex acoustic signal onto meaningful sounds and words. Yet, listeners perform this task with seemingly little effort. Accurate perception relies on the integration of both the acoustic speech signal as well as other sources of information derived from the context; identical sounds (e.g., ambiguous phonetic categories) can be heard differently depending on the context (e.g., lexical information). Perception is not only flexible enough to accommodate distortions in the speech signal but can also adapt to accommodate systematic distortions and deviations in the acoustic speech signal with exposure; for example, an unintelligible speaker with a strong foreign accent can become better understood over time. How does perception maintain such flexible and adaptive processing without affecting the stable long-term speech representations? I will present a few studies in which we examined the influence of different sources of information on perception and adaptive plasticity in order to gain insight into this question.

LingLang Lunch (11/5/2014): Chigusa Kurumada (University of Rochester)

Expectation-adaptation in the incremental interpretation of English contrastive prosody (In collaboration with Meredith Brown, Tufts University/Massachusetts General Hospital, and Michael K. Tanenhaus, University of Rochester)

The realization of prosody varies across speakers, accents, and speech conditions (e.g., Ladd, 2008). Listeners must navigate this variability to converge on consistent prosodic interpretations. We investigate whether listeners adapt to speaker-specific realization of prosody based on recent exposure and, if so, whether such adaptation is rapidly integrated with online pragmatic processing. To this end, we investigate contrastive focus, which can signal that pragmatic inference is required to determine speaker meaning (e.g., Ito & Speer, 2008; Pierrehumbert & Hirschberg, 1990; Watson et al., 2008).

In this talk, I first present results of an off-line judgement experiment using a paradigm developed to investigate implicit learning in phoneme categorization (e.g., Kraljic, Samuel & Brennan, 2008; Norris, McQueen & Cutler, 2003). The results suggest that listeners rapidly adapt their pragmatic interpretation of contrastive focus to best reflect speaker-specific realizations of prosodic cues (e.g., pitch and segment duration). I then discuss results from two eye-tracking experiments. We find that changes in the reliability of prosodic cues (estimated based on recent exposure) are reflected in changes in processing time-course: When a contrastive focus is deemed unreliable as a cue to a contrastive interpretation, listeners effectively down-weight it in their comprehension of following utterances. We conclude that such rapid recalibration of prosodic interpretations enables listeners to achieve robust online pragmatic interpretation of highly variable prosodic information.

LingLang Lunch (2/4/2015): Philip Hofmeister (Brown University)

Expectations and linguistic acceptability judgments

A growing and convergent body of evidence points to the role of expectations in online language processing and learning. This evidence includes data which indicate that processing efficiency for various sentential constructions can be improved by making them more expected (viz., more frequent) in a linguistic context (Wells et al., 2009; Fine et al., 2013). Here, I consider how expectations bear on acceptability judgments and, more specifically, shifts in acceptability judgment patterns. The hypothesis under consideration is that acceptability judgment responses reflect expectations based on previous experience. A prediction of this hypothesis is that judgments for such constructions are mutable. In a series of acceptability tasks, I illustrate that participants systematically alter their responses over the course of the experiment, such that relatively unacceptable constructional variants improve with repetition. This holds across a range of data including sentences with case errors, resumptive pronouns, island violations, center-embeddings, and more. I will construe this to mean that judgments, like a variety of other response types, are sensitive to probabilistic factors, and I will point to the implications of such findings for our understanding of grammatical change.

Colloquium (9/9/2015): Boaz Keysar (University of Chicago)

Living in a Foreign Tongue

Hundreds of millions of people live and work while using a language that is not their native tongue. Given that using a foreign language is more difficult than using a native tongue, one would expect an overall deleterious effect on their mental and physical performance. We have discovered that the opposite is often true. We argue that a foreign language provides psychological and emotional distance, thereby allowing people to be less biased in their decision-making, more willing to take smart risks, and guided more by hope than by fear of loss. We show that a foreign language also affects ethical behavior such as cheating and moral choice. But we also find that when emotions are crucial for learning from experience, the native tongue is essential for improving choice over time. Living and functioning in a foreign tongue, then, has surprising consequences for how individuals think, feel, and operate, and it has important implications for social policy, negotiation, diplomacy, and immigration issues.

Colloquium (9/16/2015): Edward Gibson (MIT)

Information theoretic approaches to language universals

Finding explanations for the observed variation in human languages is the primary goal of linguistics, and promises to shed light on the nature of human cognition. One particularly attractive set of explanations is functional in nature, holding that language universals are grounded in the known properties of human information processing. The idea is that grammars of languages have evolved so that language users can communicate using sentences that are relatively easy to produce and comprehend. In this talk, I summarize results from explorations into several linguistic domains, from an information-processing point of view.

First, we show that all the world’s languages that we can currently analyze minimize syntactic dependency lengths to some degree, as would be expected under information processing considerations. Next, we consider communication-based origins of lexicons and grammars of human languages. Chomsky has famously argued that this is a flawed hypothesis, because of the existence of such phenomena as ambiguity. Contrary to Chomsky, we show that ambiguity out of context is not only not a problem for an information-theoretic approach to language, it is a feature. Furthermore, word lengths are optimized on average according to predictability in context, as would be expected under an information-theoretic analysis. Then we show that language comprehension appears to function as a noisy channel process, in line with communication theory. Given si, the intended sentence, and sp, the perceived sentence, we propose that people maximize P(si | sp), which is equivalent to maximizing the product of the prior P(si) and the likelihood of the noise process P(si → sp). We discuss how thinking of language as communication in this way can explain aspects of the origin of word order, most notably that most human languages are SOV with case-marking, or SVO without case-marking.
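The noisy-channel computation can be sketched numerically. The candidate sentences, prior values, and simple one-word-deletion noise model below are invented for illustration only; they are not from the talk:

```python
# Toy sketch of noisy-channel comprehension: choose the intended sentence
# s_i that maximizes P(s_i | s_p), proportional to P(s_i) * P(s_i -> s_p).
# Priors and the deletion-only noise model are made-up illustrative numbers.

prior = {
    "the mother gave the candle to the daughter": 0.99,  # plausible intent
    "the mother gave the candle the daughter": 0.01,     # implausible intent
}

def noise_likelihood(intended, perceived):
    """P(s_i -> s_p): faithful transmission, or deletion of one word."""
    if intended == perceived:
        return 0.95
    if len(intended.split()) == len(perceived.split()) + 1:
        return 0.05  # one word (e.g., "to") lost in the noise
    return 0.0

def comprehend(perceived):
    """Return the candidate s_i with the highest prior * noise likelihood."""
    return max(prior, key=lambda s: prior[s] * noise_likelihood(s, perceived))
```

On this toy model, an implausible perceived string is "corrected" to the higher-prior alternative, since 0.99 × 0.05 outweighs 0.01 × 0.95, while a plausible perceived string is taken at face value.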

Colloquium (11/11/2015): Evelina Fedorenko (Mass General Hospital/Harvard Medical School)

The Language Network and Its Place within the Broader Architecture of the Human Mind and Brain

Although many animal species have the ability to generate complex thoughts, only humans can share such thoughts with one another, via language. My research aims to understand i) the system that supports our linguistic abilities, including its neural implementation, and ii) its interfaces with the rest of the human cognitive arsenal. I will begin by introducing the “language network”, a set of interconnected brain regions that support language comprehension and production. With a focus on the subset of this network dedicated to high-level linguistic processing, I will then consider two questions. First, what is the internal structure of the language network? In particular, do different brain regions preferentially process different levels of linguistic structure (e.g., sound structure vs. syntactic/semantic compositional structure)? And second, how does the language network interact with other large-scale networks in the human brain, like the domain-general cognitive control network or the network that supports social cognition? To tackle these questions, I use behavioral, fMRI, and genotyping methods in healthy adults, as well as intracranial recordings from the cortical surfaces in humans undergoing presurgical mapping (ECoG), and studies of patients with brain damage. I will argue that: i) Linguistic representations are distributed across the language network, with no evidence for segregation of distinct kinds of linguistic information (i.e., phonological, lexical, and combinatorial – syntactic/semantic – information) in distinct regions of the network. Even aspects of language that have long been argued to preferentially rely on a specific region within the language network (e.g., syntactic processing being localized to parts of Broca’s area) turn out to be distributed across the network when measured with sufficiently sensitive tools. 
Further, the very same regions that are sensitive to high-level (e.g., syntactic) structure in language show sensitivity to lower-level (e.g., phonotactic) regularities. This picture is in line with much current theorizing in linguistics and the available behavioral psycholinguistic data that shows sensitivity to contingencies spanning sound-, word- and phrase-level structure. And: ii) The language network necessarily interacts with other large-scale networks, including prominently the domain-general cognitive control system. Nevertheless, the two systems appear to be functionally distinct given a) the differences in their functional response profiles (selective responses to language vs. responses to difficulty across a broad range of tasks), and b) distinct patterns of functional correlations. My ongoing work aims to characterize the computations performed by these systems – and other systems supporting high-level cognitive abilities – in order to understand the division of labor among them during language comprehension and production.

LingLang Lunch (3/9/2016): Emily Myers (University of Connecticut)

Non-Native Speech Sound Learning: Studies of Sleep, Brain, and Behavior

Speech perception is subject to critical/sensitive period effects, such that acquisition of non-native (L2) speech sounds is far more difficult in adulthood than in childhood. Although adults can be trained to perceive differences among speech sounds that are not part of their native language, success is (1) variable across individuals, (2) variable across specific sounds to be learned, and (3) training may or may not generalize to untrained instances. Any theory of L2 speech perception must explain these three phenomena. Accounts of the L2 speech learning process have drawn from traditions in linguistics, psychology, and neuroscience, yet a full description of the barriers to perceptual learning of L2 sounds remains elusive. New evidence from our lab suggests that training on non-native speech produces plastic effects in the brain regions involved in native-language perception, and that consolidation during sleep plays a large role in the degree to which training is maintained and generalizes to new talkers. Further, similar mechanisms may be at play when listeners learn to perceive non-standard tokens in the context of accented speech. Taken together, these findings suggest that speech perception is more plastic than critical period accounts would predict and that individual variability in brain structure and sleep behavior may predict some of the variability in ultimate L2 sound acquisition success.

Colloquium (3/23/2016): Jean E. Fox Tree (University of California, Santa Cruz)

The Usefulness of Useless Utterances: Why Um, Like, and Other Disparaged Phenomena are not Superfluous

Spontaneous communication differs from prepared communication in what is said, how it is said, and how talk develops based on addressee responses. Spontaneously produced phenomena such as ums, likes, and rising intonation on declarative sentences, or uptalk, are often vilified, but they have specific functions. In addition to what is said and how it is said, spontaneous communication involves responding to contributions from interlocutors. Even the shortest of addressee responses, such as the choice between uh huh versus oh, affects speaker production and overhearer comprehension. Differences between quotation devices, such as said versus like, also reflect functional choices. Because many spontaneous phenomena do not appear in carefully constructed communication, there has been a mistaken conclusion that they are uninformative. In fact, however, spontaneous phenomena are solutions to problems encountered in unplanned, unrehearsed communication.

LingLang Lunch (4/20/2016): Eiling Yee (University of Connecticut)

Putting Concepts in Context

At first glance, conceptual representations (e.g., our internal notion of the object lemon) seem static. That is, we have the impression that there is something that lemon “means” (a sour, yellow, football-shaped, citrus fruit) and that this meaning does not vary. Research in semantic memory has traditionally taken this “static” perspective. In this talk I will describe studies that challenge this perspective by showing that the context that an individual brings with them (via their current goals, recent experience, long-term experience, or neural degeneration) influences the cognitive and neural instantiations of object concepts. I will argue that our findings support models of semantic memory in which rather than being static, conceptual representations are dynamic and shaped by experience.