Category Archives: Talks

LingLang Lunch (3/4/2020): Masoud Jasbi (Harvard)

Masoud Jasbi is a postdoctoral fellow in the Department of Linguistics at Harvard University. His research addresses how abstract functional meanings emerge and develop in the child’s mind, and how much languages vary in the ways they encode functional meaning. For more information, his website is here.


The Puzzle of Learning Disjunction

To understand language, we rely on mental representations of what words mean. What constitutes these representations, and how are they learned? To address this question, I discuss the puzzle of learning the disjunction word “or”. I present experimental studies showing that preschool children (3-5 years old) can interpret “or” as inclusive disjunction. I also present the results of a corpus annotation study showing that the exclusive interpretation is much more common in child-directed speech. These two findings sharpen a puzzle in the current literature: how can children learn the inclusive interpretation of “or” if they rarely hear it? My proposal is that exclusive interpretations in child-directed speech correlate with interpretive cues such as intonation and the semantic consistency of the disjuncts. Applying a supervised machine learning technique, I check the reliability of these cues and demonstrate that an ideal learner can use them to learn both inclusive and exclusive interpretations of disjunction from child-directed speech. Together, these studies provide evidence for a more sophisticated word-learning mechanism, as well as richer and more context-dependent representations of functional meaning than previously assumed.
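As a rough illustration of the supervised approach described above (the abstract does not specify the model or features; the feature names and data below are hypothetical stand-ins for the intonational and semantic cues it mentions), one could train a simple classifier on annotated “or” utterances and check how reliably the cues predict the interpretation:

```python
# Minimal sketch, assuming hypothetical binary cue annotations; the talk's
# actual features, data, and model may differ.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Each row annotates one "or" utterance:
# [rise-fall intonation present, disjuncts semantically consistent]
X = np.array([
    [1, 1],
    [1, 1],
    [1, 0],
    [0, 1],
    [0, 0],
    [0, 0],
])
# Label: 1 = exclusive interpretation, 0 = inclusive interpretation.
y = np.array([1, 1, 1, 0, 0, 0])

# Cross-validated accuracy estimates how reliably the cues signal the
# interpretation, i.e., whether an ideal learner could exploit them.
scores = cross_val_score(LogisticRegression(), X, y, cv=2)
print("mean cue reliability (CV accuracy):", scores.mean())
```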

LingLang Lunch (2/26/2020): Casey Lew-Williams (Princeton)

Casey Lew-Williams is an Associate Professor in the Department of Psychology and director of the Baby Lab at Princeton University. He and his lab study domain-general learning mechanisms and specific features of learning environments in order to understand the beginnings of human cognition and their consequences for children’s outcomes. For more information, his website is here.


Infants learn from meaningful structure in their communicative environments

During natural communication, caregivers pitch statistics at infants, and infants figure out what to pay attention to across milliseconds and months. In doing so, they make progress in detecting and then running with meaningful, naturally variable structure in their environments. I will present a few recent studies examining how caregivers package language to infants, how infants process patterns in the complexities of their input, and how infant-adult dyads align their brains and behaviors during natural play. I will also present preliminary analyses suggesting that such alignment is relevant to children’s learning of new information. The data collectively suggest that fine-grained, predictable statistics embedded in everyday communication are key to understanding the dynamic and consequential nature of early learning.

LingLang Lunch (2/19/2020): Maksymilian Dąbkowski (Brown)

Maksymilian Dąbkowski is a senior concentrating in Linguistics at Brown University. This is a practice talk for the 38th West Coast Conference on Formal Linguistics (WCCFL 38).


The morphophonology of A’ingae verbal stress

Stress assignment in A’ingae (or Cofán; an isolate; ISO 639-3: con) is remarkably complex. In this first investigation into the nature of this complexity, I report on the existence of six distinct accentual patterns associated with verbal morphemes, propose that stress assignment is determined by a combination of phonological and morphological factors, and develop a formal analysis of the data.

I analyze A’ingae stress assignment as determined by factors from two domains: (i) phonological, where I propose a typologically unattested glottal accent assigned at the level of the prosodic foot, and (ii) morphological, with accentual specification of suffixal lexemes. By attributing a part of the observed complexity to independently motivated glottal accent, I reduce the number of distinct lexical specifications needed to explain the six distinct accentual patterns to four suffix types. I further analyze the four different suffix types as an interaction between two binary parameters that characterize each suffix: recessive vs. dominant; and plain vs. prestressing.

The analysis is carried out in the framework of Cophonology Theory, a restrictive Optimality Theoretic approach, which allows for a parsimonious account of complex patterns emergent from interactions between phonology and morphology.
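As a toy restatement of the 2×2 typology above (this only schematizes the abstract’s two binary parameters; it is not the cophonological analysis itself, and the glosses in the comments are simplified stand-ins), the four suffix classes fall out of two binary features per suffix:

```python
# Schematic sketch: two binary parameters yield the four suffix types
# named in the abstract.
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class Suffix:
    dominant: bool      # dominant suffixes override the base's accentual specification
    prestressing: bool  # prestressing suffixes place stress on the preceding syllable

for dom, pre in product([False, True], repeat=2):
    label = f"{'dominant' if dom else 'recessive'} + {'prestressing' if pre else 'plain'}"
    print(Suffix(dom, pre), "->", label)
```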

LingLang Lunch (1/29/2020): Joanna Morris (Hampshire College & RISD)

Joanna Morris is a professor of cognitive science in the School of Cognitive Science at Hampshire College and also teaches at RISD. Her work focuses on the cognitive processes that underlie reading. Her current research examines how complex words (words with multiple parts, like sing-er and un-happy) are represented in the mental dictionary. For more information, her website is here.


Is there a ‘moth’ in mother? How we read complex words (and those that are just pretending to be).

Skilled readers identify words with remarkable speed and accuracy, and fluent word identification is a prerequisite for comprehending sentences and longer texts. Although research on word reading has tended to focus on simple words, models of word recognition must nevertheless also be able to account for complex words with multiple parts or morphemes. One theory of word reading is that we break complex words into their component parts depending on whether the meaning of the whole word can be figured out from its components. For example, a ‘pay-ment’ is something (the ‘-ment’ part) that is paid (the ‘pay-’ part); a ‘ship-ment’ is something that is shipped. However, a ‘depart-ment’ is not something that departs! Thus ‘payment’ and ‘shipment’ are semantically transparent, while ‘department’ is semantically opaque. One model of word reading holds that only semantically transparent words are broken down. Other models claim that not only are all complex words, both transparent and opaque, decomposed, but so are words that are not even really complex and only appear to be, i.e. pseudo-complex words such as ‘mother’. My research examines the circumstances under which we break complex words into their component parts, and in this talk I will address how this process may be instantiated in the brain.
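As a schematic illustration of form-based decomposition (a toy sketch under my own simplifying assumptions, not any specific model from the talk), a naive affix-stripping parser segments a pseudo-complex word like ‘mother’ exactly as it segments a genuinely complex word like ‘singer’:

```python
# Toy affix-stripping sketch with a stand-in suffix list and mini-lexicon;
# real models of morphological decomposition are far more sophisticated.
SUFFIXES = ["ment", "er"]
STEM_LEXICON = {"pay", "ship", "sing"}  # hypothetical mini-lexicon

def decompose(word):
    for suffix in SUFFIXES:
        if word.endswith(suffix):
            stem = word[: -len(suffix)]
            # Form alone licenses the split; semantics decides transparency.
            kind = "transparent" if stem in STEM_LEXICON else "opaque or pseudo-complex"
            return stem, suffix, kind
    return word, None, "simple"

for w in ["payment", "shipment", "department", "singer", "mother"]:
    print(w, "->", decompose(w))
```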

Brown at the LSA

There are several presentations by Brown students and faculty at this year’s annual meeting of the Linguistic Society of America. Come and meet us!

  • Uriel Cohen Priva, Shiying Yang, and Emily Strand will present their talk The stability of segmental properties across genre and corpus types in low-resource languages. Thursday, 5 pm, in the Kabacoff room.
  • Ellie Pavlick will talk about What Should Constitute Natural Language “Understanding”? as an invited speaker at this year’s SCiL (Society for Computation in Linguistics) meeting. Friday, 11 am, in the Kabacoff room.
  • Uriel Cohen Priva will present his poster American English vowels do not reduce to schwa: A corpus study. Friday morning plenary poster session.
  • Roman Feiman will discuss Conceptual and linguistic components of early negation comprehension at the Perspectives on Negation: A Cross-Disciplinary Discussion workshop. Saturday, 2:55 pm, at Chart B.
  • Uriel Cohen Priva and Emily Gleason will present Increased intensity is mediated by reduced duration in variable consonant lenition. Saturday, 2 pm, at Camp.
  • Youtao Lu and James Morgan will present Homophone auditory processing in cross-linguistic perspective. Sunday, 11:30 am, at Jackson.

LingLang Lunch (11/20/2019): Daniel Altshuler (Hampshire College)

Daniel Altshuler is an assistant professor of linguistics at Hampshire College. His primary research interests are in the semantics and pragmatics of natural language, in particular how compositional semantics interacts with discourse structure and discourse coherence. For more information, his website is here.


Causal reasoning about states

This talk will consider the asymmetry between how we interpret event-event sequences vs. event-state sequences such as:

(1) a. Justin fell down. Ava pushed him. (Cause-Effect, Effect-Cause)
b. Ava pushed Justin. He fell down. (Cause-Effect)

(2) a. The barn was red. I painted it. (Effect-Cause, Cause-Effect)
b. I painted the barn. It was red. (*Cause-Effect)

(3) a. A child was dead. A police officer shot him while he had his hands up. (Effect-Cause)
b. A police officer shot a child while he had his hands up. #He was dead. (*Cause-Effect)

Notice that (2b) cannot have the causal inference found in (1b), and (3b) is infelicitous. Based on these and other related data, we will consider the view that the coherence relation Result is aspectually sensitive in a way that Explanation is not. We will consider some challenges to this view, and I will outline some ways to proceed. In the end, we will have a new lens through which to think about narrative progression and narrative regression.
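As a toy rendering of that view (a sketch of the abstract’s claim using simplified aspect labels of my own, not the talk’s formal proposal), the licensing asymmetry can be stated directly:

```python
# Sketch: Result demands an eventive second clause (the effect), while
# Explanation is taken here to tolerate a stative one. A simplified
# stand-in for the aspectual-sensitivity view discussed above.
def licensed(relation: str, second_clause_aspect: str) -> bool:
    if relation == "Result":        # Cause-Effect ordering
        return second_clause_aspect == "event"
    if relation == "Explanation":   # Effect-Cause ordering
        return second_clause_aspect in {"event", "state"}
    raise ValueError(f"unknown relation: {relation}")

# (1b) "Ava pushed Justin. He fell down." -> eventive effect: Result OK
print(licensed("Result", "event"))   # True
# (2b) "I painted the barn. It was red." -> stative second clause: no Result
print(licensed("Result", "state"))   # False
```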

LingLang Lunch (11/7/2019): Judith Kroll (UC Irvine)

Judith Kroll is a Distinguished Professor in the Department of Language Science at the University of California, Irvine. Her research employs bilingualism as a tool to reveal the interplay between language and cognition. For more information, her website is here.


The fate of the native language in second language learning:
A new hypothesis about bilingualism, mind, and brain

In the last two decades there has been an upsurge of research on the bilingual mind and brain. Although the world is multilingual, only recently have cognitive and language scientists come to see that the use of two or more languages provides a unique lens to examine the neural plasticity engaged by language experience. But how? Bilinguals proficient in two languages appear to speak with ease in each language and often switch between the two languages, sometimes in the middle of a sentence. In this last period of research we have learned that the two languages are always active, creating a context in which there is mutual influence and the potential for interference. Yet proficient bilinguals rarely make errors of language, suggesting that they have developed exquisite mechanisms of cognitive control. Contrary to the view that bilingualism adds complication to the language system, the new research demonstrates that all languages that are known and used become part of the same language system. A critical insight is that bilingualism provides a tool for examining aspects of the cognitive and neural architecture that are otherwise obscured by the skill associated with native language performance in monolingual speakers. In this talk I illustrate this approach and consider the consequences that bilingualism holds more generally for cognition and learning.

LingLang Lunch (10/30/2019): Joshua Hartshorne (Boston College)

Joshua Hartshorne is an assistant professor of psychology and the director of the Language Learning Lab at Boston College. His research in language development covers a variety of phenomena in syntax, semantics, and pragmatics, and has lately focused on bootstrapping language acquisition, language and common sense, and critical periods. For more information, his website is here.


Critical periods in language, cognitive development, and massive online experiments

Only a few years ago, it was widely accepted that cognitive abilities develop during childhood and adolescence, with cognitive decline beginning at around 20 years old for fluid intelligence and in the 40s for crystallized intelligence. The obvious outlier was language learning, which appeared to begin its decline in early childhood. All these claims have been challenged by a recent flurry of studies, both from my lab and others. In particular, the ability to collect large-scale datasets has brought into sharp relief patterns in the data that were previously indiscernible. The fluid/crystallized intelligence distinction has broken down: at almost any age between 20 and 60, some abilities are still developing, some are at their peak, and some are in decline (Hartshorne & Germine, 2015). Most surprisingly, evidence suggests that the ability to learn syntax is preserved until around 18 (Hartshorne, Tenenbaum, & Pinker, 2018). This has upended our understanding of language learning and its relationship to the rest of cognitive development. In this talk, I review recent published findings, present some more recent unpublished findings, and try to chart a path forward. I also discuss the prospects of massive online experiments not just for understanding cognitive development, but for understanding cognition in general.

LingLang Lunch (10/23/2019): Uriel Cohen Priva (Brown University)

Understanding lenition through its causal structure

Consonant lenition refers to a list of seemingly unrelated processes that are grouped together by their tendency to occur in similar environments (e.g. intervocalically) and under similar conditions (e.g. in faster speech). These processes typically include degemination, voicing, spirantization, approximantization, tapping, debuccalization, and deletion (Hock 1986). So, we might ask: What are the commonalities among all these processes and why do they happen? Different theories attribute lenition to assimilation (Smith 2008), effort-reduction (Kirchner 1998), phonetic undershoot (Bauer 2008), prosodic smoothing (Katz 2016), and low informativity (Cohen Priva 2017). We argue that it is worthwhile to focus on variable lenition (pre-phonologized processes) in conjunction with two phonetic characteristics of lenition: reduced duration and increased intensity. Using mediation analysis, we find causal asymmetries between the two, with reduced duration causally preceding increased intensity. These results are surprising, as increased intensity (increased sonority) is often regarded as the defining property of lenition. The results not only simplify the assumptions associated with effort-reduction, prosodic smoothing, and low informativity, but they are also compatible with phonetic undershoot accounts.
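The abstract names mediation analysis without further detail; a minimal regression-based sketch of the underlying logic (with invented data, coefficients, and variable names) might look like this:

```python
# Hypothetical sketch: if duration mediates the lenition-intensity link,
# the direct effect of lenition on intensity should shrink once duration
# is controlled for. Data and effect sizes below are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
lenited = rng.integers(0, 2, n)                    # X: token lenited or not
duration = -0.5 * lenited + rng.normal(0, 1, n)    # M: duration drops under lenition
intensity = -0.6 * duration + rng.normal(0, 1, n)  # Y: intensity rises as duration drops

total = sm.OLS(intensity, sm.add_constant(lenited)).fit()
direct = sm.OLS(intensity, sm.add_constant(np.column_stack([lenited, duration]))).fit()

print("total effect of lenition:            ", total.params[1])
print("direct effect (duration controlled): ", direct.params[1])
print("proportion mediated:", 1 - direct.params[1] / total.params[1])
```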

LingLang Lunch (10/16/2019): Jeff Mielke (NC State)

Jeff Mielke is a professor in the Department of English at North Carolina State University. His main research interests include linguistic sound patterns and segmental phonology. For more information, his website is here.


Phonetic studies of vowels in two endangered languages

I report acoustic and articulatory studies of two endangered languages with typologically unusual vowel systems. Bora, a Witotoan language spoken in Peru and Colombia, has been described as having a three-way backness contrast between unrounded high vowels /i ɨ ɯ/. An audio-video investigation of Bora vowels reveals that while none of these vowels are produced with lip rounding, the vowel described as /ɨ/ is actually a front vowel with extreme lingual-dental contact. This appears to be a previously unknown vowel type. Kalasha, a Dardic language spoken in Pakistan, has been described as having 20 vowel phonemes: plain /i e a o u/, nasalized /ĩ ẽ ã õ ũ/, retroflex /i˞ e˞ a˞ o˞ u˞/, and retroflex nasalized /ĩ˞ ẽ˞ ã˞ õ˞ ũ˞/. An ultrasound study of Kalasha vowels shows that the vowels described as retroflex are produced not with retroflexion but with various combinations of tongue bunching and other tongue-shape differences, raising questions about whether and how these phonetic dimensions should be integrated with notions of basic vowel quality. I discuss implications of the Bora and Kalasha data for models of vowel features.