Author Archives: LLL

LingLang Lunch (09/16/2020): Kate Lindsey (Boston University)

Kate Lindsey is an assistant professor in Linguistics at Boston University. She specializes in language description, phonology, and the Pahoturi River languages of Papua New Guinea. She received her Ph.D. from Stanford University for her dissertation titled “Ghost elements in Ende phonology”. For more information, you can find her website here.

Deriving the Pahoturi River vowel space using diachronic and synchronic methods

The Pahoturi River language family of southern New Guinea consists of at least six language varieties. However, only two of these languages have been described phonetically. In this talk, I will describe how I’ve used the comparative method and phonetically transcribed wordlist data to propose vowel spaces for all the varieties in the family. Data will be presented in terms of both F1/F2 quality and length. I’ll also show some puzzling correspondence sets where vowels appear with high variance across the family. These data have implications both for understanding rare processes of vowel harmony in Papuan languages and for understanding the linguistic history of this understudied region.
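The comparative step described above starts from per-token formant measurements. As a minimal sketch of how a vowel space can be summarized from such data (the vowel labels, formant values, and durations below are invented for illustration, not the talk's actual Pahoturi River data), one can average F1, F2, and length per vowel category:

```python
# Minimal sketch, not the author's actual pipeline: summarize a vowel
# space by averaging F1/F2 (quality) and duration (length) per vowel.
# All measurement values here are hypothetical.
from statistics import mean

# Each measurement: (vowel label, F1 in Hz, F2 in Hz, duration in ms)
measurements = [
    ("i", 310, 2200, 95), ("i", 290, 2300, 88),
    ("a", 750, 1300, 130), ("a", 720, 1250, 142),
    ("u", 330, 900, 101), ("u", 350, 870, 97),
]

def vowel_space(data):
    """Group measurements by vowel; return mean F1, F2, and duration."""
    by_vowel = {}
    for vowel, f1, f2, dur in data:
        by_vowel.setdefault(vowel, []).append((f1, f2, dur))
    return {
        v: {
            "F1": mean(m[0] for m in ms),
            "F2": mean(m[1] for m in ms),
            "duration": mean(m[2] for m in ms),
        }
        for v, ms in by_vowel.items()
    }

space = vowel_space(measurements)
# Low mean F1 corresponds to high vowels; high mean F2 to front vowels,
# so /i/ plots high and front, /a/ low, /u/ high and back.
```

Plotting these means with F2 on a reversed x-axis and F1 on a reversed y-axis gives the familiar vowel-chart orientation.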

Spring 2020 Schedule

Date Speaker Title
1/22/2020 First day of classes
1/29/2020 Joanna Morris (Hampshire College & RISD) Is there a ‘moth’ in mother? How we read complex words (and those that are just pretending to be)
2/19/2020 Maksymilian Dabkowski (Brown) The morphophonology of A’ingae verbal stress
2/26/2020 Casey Lew-Williams (Princeton) Infants learn from meaningful structure in their communicative environments
3/4/2020 Masoud Jasbi (Harvard) The Puzzle of Learning Disjunction
3/11/2020 Lauren Franklin (Brown) TBD
3/17/2020 Rachel Burdin (University of New Hampshire) TBD (CANCELLED DUE TO COVID-19)
3/25/2020 Spring Recess
4/1/2020 Athulya Aravind (MIT) TBD (CANCELLED DUE TO COVID-19)
4/8/2020 Shiying Yang (Brown) TBD (CANCELLED DUE TO COVID-19)
4/22/2020 Youtao Lu (Brown) TBD
4/29/2020 Misha Ali (Brown) TBD

LingLang Lunch (1/29/2020): Joanna Morris (Hampshire College & RISD)

Joanna Morris is a professor of cognitive science in the School of Cognitive Science at Hampshire College and also teaches at RISD. Her work focuses on the cognitive processes that underlie reading. Her current research examines how complex words—words with multiple parts, like sing-er and un-happy—are represented in the mental dictionary. For more information, her website is here.

Is there a ‘moth’ in mother? How we read complex words (and those that are just pretending to be)

Skilled readers identify words with remarkable speed and accuracy, and fluent word identification is a prerequisite for comprehending sentences and longer texts. Although research on word reading has tended to focus on simple words, models of word recognition must nevertheless also be able to account for complex words with multiple parts or morphemes. One theory of word reading is that we break complex words into their component parts depending on whether the meaning of the whole word can be figured out from its components. For example, a ‘pay-ment’ is something (the ‘-ment’ part) that is paid (the ‘pay-’ part); a ‘ship-ment’ is something that is shipped. However, a ‘depart-ment’ is not something that departs! Thus ‘payment’ and ‘shipment’ are semantically transparent, while ‘department’ is semantically opaque. One model of word reading holds that only semantically transparent words are broken down. Other models claim that not only are all complex words—both transparent and opaque—decomposed, but so are words that are not even really complex but only appear to be, i.e. pseudo-complex words such as ‘mother’. My research examines the circumstances under which we break complex words into their component parts, and in this talk I will address how this process may be instantiated in the brain.
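The notion of a pseudo-complex word can be made concrete with a toy affix-stripping sketch (the word lists and the naive segmentation rule below are invented for illustration; this is not Morris's model). A purely form-based parser segments ‘mother’ into ‘moth’ + ‘-er’ just as readily as it segments a truly complex word like ‘payment’:

```python
# Toy affix-stripping sketch (illustrative only): segmentation based
# purely on word form cannot tell truly complex words ("payment")
# apart from pseudo-complex ones ("mother"). Word lists are invented.
STEMS = {"pay", "ship", "depart", "sing", "moth", "teach"}
SUFFIXES = {"ment", "er"}

def strip_suffix(word):
    """Return (stem, suffix) if the word looks decomposable, else None."""
    for suffix in SUFFIXES:
        stem = word[: -len(suffix)]
        if word.endswith(suffix) and stem in STEMS:
            return (stem, suffix)
    return None

# "payment" segments as ("pay", "ment"), but "mother" also segments
# as ("moth", "er") even though it is not morphologically complex --
# it only appears to be. Distinguishing the two requires semantics.
```

The point of the sketch is that form alone overgenerates, which is why the transparent/opaque/pseudo-complex distinction has to be settled empirically.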

LingLang Lunch (3/21/2019): Jason Shaw (Yale)

Jason Shaw is an Assistant Professor in the Department of Linguistics and director of the Phonetics Laboratory at Yale University. His research investigates how the continuous dimensions of speech, including the kinematics of speech organs and the resulting acoustics, are structured by phonological form. For more information, his website is here.

Phonological control of time

Speech unfolds in time in ways that are language-specific and seem to be conditioned in part by phonological structure. However, language-specific timing patterns are generally still situated outside the scope of phonological theory. Articulatory Phonology (AP) is an exception in this regard. In AP, language-specific timing patterns are modelled in terms of coordination between articulatory gestures, primitive units of phonological contrast. The network of coordination relations between gestures drives articulatory movements in speech. In this talk, I’ll present two case studies that pose apparent challenges to AP and show how these challenges can be resolved. The first case study presents Electromagnetic Articulography data tracking articulatory movements in Mandarin Chinese. The key finding is that the relative timing between consonants and vowels in Mandarin varies systematically with token-to-token variability in the spatial position of the tongue, a pattern which is not expected under feed-forward timing control, as in AP. The second case study is a field-based ultrasound study of lenition in Iwaidja, an Australian aboriginal language. In intervocalic position, velar approximants in Iwaidja variably delete. The challenge for AP is that temporal duration is partially preserved even as the velar consonant is completely lost. Developing a theoretical account of these patterns in AP reveals dimensions over which phonological systems shape language-specific variation in timing.

LingLang Lunch (3/15/2019): Suzi Lima (Toronto)

Suzi Lima is an Assistant Professor in the Linguistics Department and in the Spanish and Portuguese Department at the University of Toronto. Currently, she is interested in the semantics of reference and quantification in Brazilian Portuguese and in Brazilian indigenous languages. For more information, her website is here.

A typology of the count/mass distinction in Brazil and its relevance for count/mass theories

Since Link’s (1983) seminal contribution, much work has explored the semantics of count and mass nouns from both theoretical and experimental perspectives. In this talk, I explore some of the recent advances in this field, drawing particularly from experimental research and descriptions of understudied Brazilian languages, especially Yudja (Juruna family, Tupi Stock). This talk has two main goals. First, I will explore the debate about what can be counted grammatically, that is, how we define atoms and what role extra-linguistic factors may play in this process, focusing on the distinction between natural and semantic atomicity (Rothstein 2010). More specifically, I will show that, in many languages, substance-denoting nouns – predicted to be uncountable in most count/mass theories (cf. Chierchia 1998, 2010) – can interact with the counting system, suggesting that the substance/object distinction might have an impact on what is more likely to be counted, but does not in itself restrict counting. I will also argue that the counting units that we use with object-denoting nouns do not always correspond to ‘natural atoms’. Second, I will discuss the results of a large-scale project on the count/mass distinction in 17 Brazilian languages, and how the results of this project can contribute to typological research on this topic.

LingLang Lunch (2/28/2019): Chelsea Sanker (Brown)

Chelsea Sanker is a Visiting Assistant Professor in CLPS at Brown University. Her research aims to understand how phonetic details fit into phonological representations, and how that is reflected in perception and production. For more information, her website is here.

Secondary cues to coda voicing and vowel duration

The production of voicing in coda consonants is reflected in a range of acoustic correlates, driven by articulatory constraints. However, not all of these acoustic characteristics are used as perceptual cues to voicing. I provide an overview of acoustic correlates of voicing in production and present perceptual data demonstrating that only some of these acoustic characteristics influence listeners’ decisions about coda voicing. Furthermore, because some of these acoustic characteristics exist in production but are not perceived as cues to voicing, they are particularly well situated to contribute to secondary voicing-conditioned effects. The second part of my talk examines how some of the same acoustic characteristics that are caused by coda voicing influence perception of vowel duration. In particular, spectral tilt and intensity contour have a large effect on perceived duration; higher spectral tilt and steeper decreases in intensity, as caused by voiceless codas, also decrease the perceived duration of vowels. This relationship provides a possible perceptual pathway for the development of the frequently attested pattern of voicing-conditioned vowel duration.

Spring 2019 Speaker Schedule

Date Speaker Title
1/23/2019 First day of classes
2/7/2019 Ellie Pavlick (Brown) Why should NLP care about linguistics?
2/28/2019 Chelsea Sanker (Brown) Secondary cues to coda voicing and vowel duration
3/15/2019 Suzi Lima (Toronto) A typology of the count/mass distinction in Brazil and its relevance for count/mass theories
3/21/2019 Jason Shaw (Yale) Phonological control of time
3/28/2019 Spring Recess
4/4/2019 Jessi Grieser (The University of Tennessee Knoxville) Talking Place, Speaking Race
4/11/2019 Roger Levy (MIT) Implicit gender bias in preferred linguistic descriptions for expected events
4/25/2019 Anna Bjurman Pautz (Brown University) Two-dimensionalism and Empty names in Propositional Attitude reports
5/2/2019 Lynnette Arnold & Paja Faudree Language and Social Justice: Teaching About the “Word Gap”

LingLang Lunch (2/7/2019): Ellie Pavlick (Brown)

Ellie Pavlick is an Assistant Professor of Computer Science at Brown University and an academic partner with Google AI. She is interested in building better computational models of natural language semantics and pragmatics. For more information, her website is here.

Why should NLP care about linguistics?

In just the past few months, a flurry of adversarial studies have pushed back on the apparent progress of neural networks, with multiple analyses suggesting that deep models of text fail to capture even basic properties of language, such as negation, word order, and compositionality. Alongside this wave of negative results, our field has stated ambitions to move beyond task-specific models and toward “general purpose” word, sentence, and even document embeddings. This is a tall order for the field of NLP and, I argue, marks a significant shift in the way we approach our research. I will discuss what we can learn from the field of linguistics about the challenges of codifying all of language in a “general purpose” way. Then, more importantly, I will discuss what we cannot learn from linguistics. I will argue that the state of the art in NLP research is operating close to the limits of what we know about natural language semantics, both within our field and outside it. I will conclude with thoughts on why this opens opportunities for NLP to advance both technology and basic science as it relates to language, and the implications for the way we should conduct empirical research.
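The word-order failures mentioned above can be illustrated with a toy example (not one from the talk): any order-insensitive sentence representation, such as a bag of words, necessarily assigns identical embeddings to sentences that differ only in word order, however different their meanings:

```python
# Toy illustration (not from the talk): a bag-of-words "embedding"
# discards word order, so sentences with opposite meanings can map
# to exactly the same representation.
from collections import Counter

def bag_of_words(sentence):
    """Represent a sentence as an order-insensitive word-count vector."""
    return Counter(sentence.lower().split())

s1 = bag_of_words("the dog bit the man")
s2 = bag_of_words("the man bit the dog")
# s1 == s2: the two count vectors are identical, even though who bit
# whom has been reversed. Neural embeddings are not bags of words, but
# adversarial probes ask whether they behave like them in practice.
```

This is the kind of minimal-pair probe that the adversarial studies use to test whether learned representations actually encode structure.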

LingLang Lunch (11/28/2018): Steven Frankland (Princeton)

Steven Frankland is a postdoctoral fellow at the Princeton Neuroscience Institute. He studies how complex thoughts are composed in the human brain, and how this composition relates to language production, reasoning, and decision making. For more information, his website is here.

Structured re-use of neural representations in language and thought

To understand an unfamiliar event, we must retrieve concepts from memory, and flexibly assemble them to encode structural aspects of the event (e.g., who did what to whom). Imagine, for example, the difference between the sentences “the lawyer needs an accountant” and “the accountant needs a lawyer”. In a series of fMRI studies, we examine how the human brain encodes the different meanings of such sentences, and does so in a way that generalizes to novel combinations. We find two regions, left-mid superior temporal cortex (lmSTC) and medial-prefrontal cortex (mPFC) that re-use relational representations across sentences, supporting composition. However, we find that they do so using different representational strategies. lmSTC encodes abstract semantic role variables (who did it? to whom was it done?) that are re-used across verbs. mPFC, by contrast, encodes narrow noun-verb conjunctions, specific to a particular event-type (who needed something?). The representational re-use in both regions supports composition, but reflects a tradeoff between generalization (lmSTC) and specificity (mPFC). The hippocampus plays a different role, representing recurring conjunctive representations as more dissimilar to one another than expected by chance, consistent with a role in pattern separation. We suggest that these regions play distinct, but complementary, roles in a hierarchical, generative system for composing representations of events.

LingLang Lunch (11/7/2018): Scott Seyfarth (OSU)

Scott Seyfarth is a postdoctoral researcher in the Department of Linguistics at Ohio State University. His research interests are in phonetics, psycholinguistics, and laboratory phonology. For more information, his website is here.

Variable external sandhi in a communication-oriented phonology

In communicative or message-oriented approaches to phonology, variable phonological alternations are licensed according to how they facilitate the robust identification of meaning. I illustrate this approach using two case studies of American English – intervocalic /t/ alternations and nasal place assimilation – and argue that it makes novel predictions about the variable application of these phenomena that are unexpected within alternative usage-based theories that emphasize the role of routinization and repetition in phonological variation. For each case, I present evidence from a corpus study which supports the predictions of a communication-oriented phonology. These results suggest how connected-speech processes might be shaped by functional pressures, and complicate a view of such processes as lenition. More generally, they point toward the need to take into account the communicative function of phonological alternants when describing where and why they are likely to occur.