Chelsea Sanker is a Visiting Assistant Professor of CLPS at Brown University. Her research aims to understand how phonetic details fit into phonological representations, and how that is reflected in perception and production. For more information, her website is here.
Secondary cues to coda voicing and vowel duration
The production of voicing in coda consonants is reflected in a range of acoustic correlates, driven by articulatory constraints. However, not all of these acoustic characteristics are used as perceptual cues to voicing. I provide an overview of acoustic correlates of voicing in production and present perceptual data demonstrating that only some of these acoustic characteristics influence listeners’ decisions about coda voicing. Furthermore, because some of these acoustic characteristics exist in production but are not perceived as cues to voicing, they are particularly well situated to contribute to secondary voicing-conditioned effects. The second part of my talk examines how some of the same acoustic characteristics that are caused by coda voicing influence perception of vowel duration. In particular, spectral tilt and intensity contour have a large effect on perceived duration; higher spectral tilt and steeper decreases in intensity, as caused by voiceless codas, also decrease the perceived duration of vowels. This relationship provides a possible perceptual pathway for the development of the frequently attested pattern of voicing-conditioned vowel duration.
The Oxford Handbook of Ellipsis was just published. It includes a paper by Scott AnderBois entitled “Ellipsis in Inquisitive Semantics” and a paper by Polly Jacobson entitled “Ellipsis in Categorial Grammar”.
Ellie Pavlick is an Assistant Professor of Computer Science at Brown University and an academic partner with Google AI. She is interested in building better computational models of natural language semantics and pragmatics. For more information, her website is here.
Why should NLP care about linguistics?
In just the past few months, a flurry of adversarial studies have pushed back on the apparent progress of neural networks, with multiple analyses suggesting that deep models of text fail to capture even basic properties of language, such as negation, word order, and compositionality. Alongside this wave of negative results, our field has stated ambitions to move beyond task-specific models and toward “general purpose” word, sentence, and even document embeddings. This is a tall order for the field of NLP, and, I argue, marks a significant shift in the way we approach our research. I will discuss what we can learn from the field of linguistics about the challenges of codifying all of language in a “general purpose” way. Then, more importantly, I will discuss what we cannot learn from linguistics. I will argue that the state of the art of NLP research is operating close to the limits of what we know about natural language semantics, both within our field and outside it. I will conclude with thoughts on why this opens opportunities for NLP to advance both technology and basic science as they relate to language, and the implications for the way we should conduct empirical research.