LingLang Lunch (3/9/2016): Emily Myers (University of Connecticut)

Non-Native Speech Sound Learning: Studies of Sleep, Brain, and Behavior

Speech perception is subject to critical/sensitive period effects, such that acquisition of non-native (L2) speech sounds is far more difficult in adulthood than in childhood. Although adults can be trained to perceive differences among speech sounds that are not part of their native language, success (1) varies across individuals, (2) varies across the specific sounds to be learned, and (3) may or may not generalize to untrained instances. Any theory of L2 speech perception must explain these three phenomena. Accounts of the L2 speech learning process have drawn from traditions in linguistics, psychology, and neuroscience, yet a full description of the barriers to perceptual learning of L2 sounds remains elusive. New evidence from our lab suggests that training on non-native speech produces plastic effects in the brain regions involved in native-language perception, and that consolidation during sleep plays a large role in the degree to which training is maintained and generalizes to new talkers. Further, similar mechanisms may be at play when listeners learn to perceive non-standard tokens in the context of accented speech. Taken together, these findings suggest that speech perception is more plastic than critical period accounts would predict, and that individual variability in brain structure and sleep behavior may predict some of the variability in ultimate L2 sound acquisition success.

Colloquium (3/23/2016): Jean E. Fox Tree (University of California, Santa Cruz)

The Usefulness of Useless Utterances: Why Um, Like, and Other Disparaged Phenomena are not Superfluous

Spontaneous communication differs from prepared communication in what is said, how it is said, and how talk develops based on addressee responses. Spontaneously produced phenomena such as ums, likes, and rising intonation on declarative sentences, or uptalk, are often vilified, but they have specific functions. In addition to what is said and how it is said, spontaneous communication involves responding to contributions from interlocutors. Even the shortest of addressee responses, such as the choice between uh huh and oh, affects speaker production and overhearer comprehension. Differences between quotation devices, such as said versus like, also reflect functional choices. Because many spontaneous phenomena do not appear in carefully constructed communication, they have mistakenly been dismissed as uninformative. In fact, however, spontaneous phenomena are solutions to problems encountered in unplanned, unrehearsed communication.

LingLang Lunch (4/6/2016): Matthew Barros (Yale University)

Sluicing and Ellipsis Identity

This talk focuses on sluicing constructions: the ellipsis of TP in a Wh-question, leaving a Wh-phrase “remnant” overt. Sluicing is subject to an identity condition that must hold between the sluiced question and its antecedent. There is currently no consensus on whether this condition should be characterized as syntactic or semantic in nature, or whether a hybrid condition that makes reference to both semantic and syntactic identity is needed (Merchant 2005, Chung 2013, Barker 2013). I provide a new identity condition that captures extant syntactic generalizations while allowing enough wiggle room to let in detectable mismatches between the antecedent and sluice. The new identity condition also lets in “pseudosluices” alongside isomorphic sluices, where the sluiced question is a cleft or a copular question while the antecedent is not. Pseudosluicing has often been proposed as a last-resort mechanism, available only when an isomorphic structure is independently ruled out (Rodrigues et al. 2009, Vicente 2008, van Craenenbroeck 2010). I defend a view on which pseudosluicing is not a special case of sluicing, so that the identity condition should not distinguish between copular and non-copular clauses in the determination of identity. The new identity condition achieves this by making no reference to the syntactic content of the ellipsis site.

LingLang Lunch (4/20/2016): Eiling Yee (University of Connecticut)

Putting Concepts in Context

At first glance, conceptual representations (e.g., our internal notion of the object lemon) seem static. That is, we have the impression that there is something that lemon “means” (a sour, yellow, football-shaped, citrus fruit) and that this meaning does not vary. Research in semantic memory has traditionally taken this “static” perspective. In this talk I will describe studies that challenge this perspective by showing that the context that an individual brings with them (via their current goals, recent experience, long-term experience, or neural degeneration) influences the cognitive and neural instantiations of object concepts. I will argue that our findings support models of semantic memory in which rather than being static, conceptual representations are dynamic and shaped by experience.

LingLang Lunch (4/29/2016): Florian Jaeger (University of Rochester)

From processing to language change and cross-linguistic distributions

I’ll present recent attempts to contribute to a wee little question in linguistics: the role of ‘language use’ in language change and, as a consequence, in the cross-linguistic distribution of linguistic properties. Specifically, I focus on the extent to which communicative and processing biases shape language. I hope to demonstrate how advances in computational psycholinguistics can contribute to this question: advances in our empirical and theoretical understanding of the biases operating during language production and understanding allow more predictive and principled notions of language use, and advances in empirical methods allow us to test more directly hypotheses about not only whether, but also how, these biases come to shape aspects of grammar.

I’ll present a medley of case studies on this question, which will hopefully make for some interesting discussion. I’ll begin with a computational study on the syntax of five languages: do the grammars of these languages order information in a way that makes the language easier to process than expected by chance (Gildea & Jaeger, 2015)? I then present work on miniature artificial language learning to show that the biases we observe in the first study operate during language acquisition, and that they are strong enough to bias learners to deviate from the input language towards languages that are easier to process and encode information more efficiently (Fedzechkina, Jaeger, & Newport, 2012; Fedzechkina, Newport, & Jaeger, 2016; Fedzechkina & Jaeger, under review). Time permitting, I’ll also show how related biases might cause change within a speaker’s production over that speaker’s lifetime (suggesting a second path through which language processing can affect language change; Buz, Tanenhaus, & Jaeger, 2016). Alternatively, I can show how adaptive processes during language understanding continuously reshape our linguistic representations throughout our lives (Fine, Jaeger, Farmer & Qian, 2013; Kleinschmidt & Jaeger, 2015), including the acquisition of new (e.g., dialectal) syntax (Fraundorf & Jaeger, under review). Come prepared to vote (and to be outvoted).
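
The first case study’s question, whether attested word orders make languages easier to process than chance, is often operationalized in terms of dependency length. The sketch below illustrates the general logic with a toy dependency tree and a random-permutation baseline; it is my own minimal illustration, not Gildea & Jaeger’s actual measure or pipeline:

```python
import random

def dependency_length(order, edges):
    """Sum of linear distances between heads and dependents under a word order."""
    pos = {word: i for i, word in enumerate(order)}
    return sum(abs(pos[head] - pos[dep]) for head, dep in edges)

# Toy dependency tree for "the dog chased a cat", as (head, dependent) pairs
words = ["the", "dog", "chased", "a", "cat"]
edges = [("dog", "the"), ("chased", "dog"), ("chased", "cat"), ("cat", "a")]

attested = dependency_length(words, edges)

# Chance baseline: mean dependency length over random reorderings of the same words
random.seed(0)
samples = [dependency_length(random.sample(words, len(words)), edges)
           for _ in range(10000)]
baseline = sum(samples) / len(samples)

print(f"attested: {attested}, chance baseline: {baseline:.2f}")
```

If attested orders reliably beat the chance baseline across a corpus, that is evidence that the grammar orders information in a processing-friendly way.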

LingLang Lunch (9/28/2016): Andy Wedel (University of Arizona)

Functional pressure from the lexicon shapes phoneme inventory evolution

A language’s sound system must provide for perceptual contrast between different morphemes in order to communicate meaning distinctions. Here I present evidence that lexical competition induces phonetically specific hyperarticulation of individual words, and that this effect in turn influences long-term change in the system of phonemic contrasts.

Starting at the level of long-term language change, we find that the number of minimal lexical pairs that a phoneme contrast distinguishes strongly predicts whether a change to that phoneme contrast preserves or eliminates lexical distinctions. Specifically, phoneme contrasts that distinguish few minimal pairs are more likely to merge (a change that eliminates lexical distinctions), while those that distinguish many minimal pairs are more likely to participate in chain-shifts or phoneme splits (changes that preserve lexical distinctions).
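
The predictor here, how many minimal pairs a contrast distinguishes, can be computed directly from a lexicon. A minimal sketch, using an invented toy lexicon in broad transcription (not Wedel’s actual data or counting procedure):

```python
def count_minimal_pairs(lexicon, seg_a, seg_b):
    """Count word pairs distinguished solely by the seg_a/seg_b contrast."""
    words = {tuple(w) for w in lexicon}
    pairs = set()
    for word in words:
        for i, seg in enumerate(word):
            if seg == seg_a:
                swapped = word[:i] + (seg_b,) + word[i + 1:]
                if swapped in words:
                    pairs.add(frozenset((word, swapped)))
    return len(pairs)

lexicon = [("p", "a", "t"), ("b", "a", "t"), ("p", "i", "n"),
           ("b", "i", "n"), ("m", "a", "t")]
print(count_minimal_pairs(lexicon, "p", "b"))  # 2: pat~bat and pin~bin
```

On the account above, a /p/~/b/ contrast doing this much lexical work would be a poor candidate for merger.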

In one proposed mechanism for this effect, hyperarticulation of phonetic cues distinguishing words creates within-category, ‘cryptic’ variation in phoneme categories, which in turn shapes future patterns of sound change. At the level of usage, this model predicts that we should find hyperarticulation of phonetic cues that provide more information distinguishing their host word from a competitor. In support of this prediction, I show evidence that in a corpus of natural English speech, two distinct types of phonetic cues, voice onset time and vowel-to-vowel Euclidean distance, are hyperarticulated when they distinguish their host word from a minimal pair competitor (e.g., pat ~ bat). Taken together, these results provide strong converging evidence that hyperarticulation of phonetic cues to lexical meaning in usage indirectly promotes the maintenance of a communicatively efficient system of phoneme contrasts over time.
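
The vowel cue mentioned above is simply distance in formant space. A minimal sketch of the measure, with invented formant values and raw Hz for simplicity (acoustic studies typically normalize or use an auditory scale such as Bark):

```python
import math

def vowel_distance(token, competitor):
    """Euclidean distance between two vowels in (F1, F2) space, in Hz."""
    return math.hypot(token[0] - competitor[0], token[1] - competitor[1])

# Hypothetical values: a token of the vowel in "pat" vs. the mean of the
# competing category; a hyperarticulated token sits farther from the competitor
print(vowel_distance((700, 1750), (600, 1900)))  # ~180 Hz
```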

LingLang Lunch (10/19/2016): Matt Masapollo (Brown University)

On the nature of the natural referent vowel bias

Considerable research on cross-language speech perception has shown that perceivers (both adult and infant) are universally biased toward the extremes of articulatory/acoustic vowel space (peripheral in F1/F2 vowel space; Polka & Bohn, 2003, 2011). Much of the evidence for this bias comes from studies showing that perceivers consistently discriminate vowels in an asymmetric manner. More precisely, perceivers perform better at detecting a change from a relatively less peripheral vowel (e.g., /e/) to a relatively more peripheral vowel (e.g., /i/), compared to the same change presented in the reverse direction. Although the existence of this perceptual phenomenon (i.e., the natural referent vowel [NRV] bias) is well established, the processes that underlie it remain poorly understood. One account of the NRV bias, which derives from the Dispersion–Focalization Theory (Schwartz et al., 2005), is that extreme vocalic articulations give rise to acoustic vowel signals that exhibit increased spectral salience due to formant frequency convergence, or “focalization.” In this talk, I will present a series of experiments aimed at assessing whether adult perceivers are indeed sensitive to differences in formant proximity while discriminating vowel stimuli that fall within a given category, and, if so, whether that sensitivity is attributable to general properties of auditory processing or to phonetic processes that extract articulatory information available across sensory modalities. In Experiment 1, English- and French-speaking perceivers showed directional asymmetries consistent with the focalization account as they attempted to discriminate synthetic /u/ variants that systematically differed in their peripherality, and hence in their degree of formant proximity (between F1 and F2). In Experiment 2, similar directional effects were found when English- and French-speaking perceivers attempted to discriminate natural /u/ productions that differed in their articulatory peripherality when only acoustic-phonetic or only visual-phonetic information was present. Experiment 3 investigated whether and how the integration of acoustic and visual speech cues influences the effects documented in Experiment 2. When acoustic and visual cues were phonetically congruent, an NRV bias was observed. In contrast, when acoustic and visual cues were phonetically incongruent, this bias was disrupted, confirming that both sensory channels shape this bias in bimodal auditory-visual vowel perception. Collectively, these findings suggest that perceivers are universally biased to attend to extreme vocalic gestures specified optically, in terms of articulatory kinematic patterns, as well as acoustically, in terms of formant convergence patterns. A complete understanding of this bias is not only important to speech perception theories, but also provides a critical basis for the study of phonetic development and of the perceptual factors that may constrain vowel inventories across languages.
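
“Formant proximity” here is the spectral separation between neighboring formants. A toy illustration with invented values, using raw Hz (the Dispersion–Focalization literature defines focalization over auditory, Bark-scaled distances, so this is a deliberate simplification):

```python
def formant_proximity(f1, f2):
    """F2 - F1 separation in Hz; smaller values = greater formant convergence."""
    return f2 - f1

# Hypothetical synthetic /u/ variants: a more peripheral /u/ has low, closely
# spaced F1 and F2, making it more "focal"
focal = formant_proximity(290, 650)        # 360 Hz apart
less_focal = formant_proximity(350, 900)   # 550 Hz apart
print(focal < less_focal)                  # True
```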

Colloquium (11/2/2016): Valentine Hacquard (University of Maryland)

Grasping at Factivity

Speakers mean more than their sentences do, because they can take a lot about their audience for granted. This talk explores how presuppositions and pragmatic enrichments play out in acquisition. How do children untangle semantic from pragmatic contributions to what speakers mean? The case study I will focus on is how children learn the meaning of the words think and know. When and how do children figure out that think but not know can be used to report false beliefs? When and how do they figure out that with know, but not think, speakers tend to presuppose the truth of the complement clause? I will suggest that the path of acquisition is traced by the child’s understanding both of where such verbs occur, and of why speakers use them. (joint work with Rachel Dudley and Jeff Lidz)

LingLang Lunch (11/9/2016): Chelsea Sanker (Brown University)

Phonetic convergence across measures and across speakers

Speakers have a tendency to sound increasingly like their interlocutors, a phenomenon called phonetic convergence, which is observed in a range of linguistic characteristics (e.g. vowel formants, Babel 2012; intensity, Gregory and Hoyt 1982; timing of conversational turns and pauses, Street 1984). In this talk, I use data from a range of phonetic measures to shed light on whether individuals exhibit tendencies for convergence across different characteristics, in different tasks, and with different partners.
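
A common way to quantify convergence on any single measure is a difference-in-distance score: how much closer the two interlocutors’ values are late in an interaction than they were early on. A minimal sketch under that assumption (a standard style of measure in the convergence literature, not necessarily the exact one used in these studies):

```python
def convergence(speaker_early, speaker_late, partner_early, partner_late):
    """Difference-in-distance: positive values mean the pair grew more similar."""
    return abs(speaker_early - partner_early) - abs(speaker_late - partner_late)

# Hypothetical mean pitch (Hz) for two interlocutors, early vs. late in a conversation
print(convergence(210.0, 198.0, 180.0, 185.0))  # 30 - 13 = 17 -> convergence
```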

Convergence varies across individuals, but each individual shows some consistency, within a given measure, in the degree of convergence exhibited across her conversations, both when interacting with different partners and when undertaking different tasks with the same partner. This cross-conversation consistency was present in phonological measures (vowel formants) and prosodic measures (intensity, pitch, phonation), but was not significant for turn-taking and speech rate patterns.

Convergence also varies across measures, and there was no significant correlation between convergence in different measures: the patterns a speaker exhibits in one measure are not predictive of her patterns in other measures.

These findings indicate that convergence results from one measure will not necessarily be representative of what would be found in other measures, which has implications for designing convergence research and for interpreting its results. Moreover, they suggest that the processes underlying convergence in different characteristics are not equivalent, but may be mediated by individual differences in attention or in other aspects of phonological processing and storage.

LingLang Lunch (11/16/2016): Uriel Cohen Priva (Brown University)

An interplay between information, duration, and lenition

What are the causal mechanisms that lead to lenition, and why do languages tend to lenite particular segments? An account that predicts the actuation of lenition should explain why lenition tends to be exceptionless, why it may result in more effortful articulatory production, why it seems to ignore certain merger-avoidance properties that exist elsewhere in language (Wedel, Kaplan, and Jackson 2013), and why different language varieties may be similar in leniting the same segment but differ in the output (e.g., /t/ to /ʔ/ vs. /t/ to /ɾ/). I propose that all these properties can be explained if we assume that lenition is caused by reduction in duration, and that one of the leading factors in systematic reduction in duration is low segment informativity. I show that low informativity is correlated with shorter duration in American English, above and beyond contextual predictability. I show that, cross-linguistically, the segments that selectively lenite in a given language are those with relatively lower informativity in that language. I further show that lenition processes match duration reduction better than they match undershoot alone or effort reduction alone, and that durational reduction occurs in leniting environments even when lenition itself does not happen.
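
Segment informativity is the average surprisal of a segment given the contexts in which it occurs, weighted by how often it occurs in each context. A toy, bigram-context version of the measure (the actual work uses much richer contexts and large corpora, so treat this purely as an illustration of the formula):

```python
import math
from collections import Counter, defaultdict

def informativity(tokens):
    """For each segment s: sum over contexts c of P(c | s) * -log2 P(s | c)."""
    context_counts = Counter(tokens[:-1])           # how often each context occurs
    pair_counts = Counter(zip(tokens, tokens[1:]))  # (context, segment) bigrams
    contexts_of = defaultdict(Counter)
    for (ctx, seg), n in pair_counts.items():
        contexts_of[seg][ctx] += n
    info = {}
    for seg, ctxs in contexts_of.items():
        total = sum(ctxs.values())
        info[seg] = sum(
            (n / total) * -math.log2(pair_counts[(ctx, seg)] / context_counts[ctx])
            for ctx, n in ctxs.items()
        )
    return info

# Toy segment stream: the rare, unpredictable "k" comes out far more informative
# than the near-deterministic "t" and "a"
print(informativity(list("tatatakata")))
```

On the proposal above, it is the low-informativity segments (the cheap, predictable ones) that shorten and therefore lenite.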