The Puzzle of Learning Disjunction
Infants learn from meaningful structure in their communicative environments
The morphophonology of A’ingae verbal stress
I analyze A’ingae stress assignment as determined by factors from two domains: (i) phonological, where I propose a typologically unattested glottal accent assigned at the level of the prosodic foot, and (ii) morphological, with accentual specification of suffixal lexemes. By attributing part of the observed complexity to the independently motivated glottal accent, I reduce the number of lexical specifications needed to explain the six distinct accentual patterns to four suffix types. I further analyze the four suffix types as an interaction between two binary parameters that characterize each suffix: recessive vs. dominant, and plain vs. prestressing.
The analysis is carried out in the framework of Cophonology Theory, a restrictive Optimality Theoretic approach, which allows for a parsimonious account of complex patterns emergent from interactions between phonology and morphology.
Causal reasoning about states
This talk will consider the asymmetry between how we interpret event-event sequences vs. event-state sequences such as:
(1) a. Justin fell down. Ava pushed him. (Cause-Effect, Effect-Cause)
b. Ava pushed Justin. He fell down. (Cause-Effect)
(2) a. The barn was red. I painted it. (Effect-Cause, Cause-Effect)
b. I painted the barn. It was red. (*Cause-Effect)
(3) a. A child was dead. A police officer shot him while he had his hands up. (Effect-Cause)
b. A police officer shot a child while he had his hands up. #He was dead. (*Cause-Effect)
Notice that (2b) cannot have the causal inference found in (1b), and that (3b) is infelicitous. Based on these and other related data, we will consider the view that the coherence relation Result is aspectually sensitive in a way that Explanation is not. We will consider some challenges to this view and I will outline some ways to proceed. In the end, we will have a new lens through which to think about narrative progression and narrative regression.
The fate of the native language in second language learning:
A new hypothesis about bilingualism, mind, and brain
In the last two decades there has been an upsurge of research on the bilingual mind and brain. Although the world is multilingual, only recently have cognitive and language scientists come to see that the use of two or more languages provides a unique lens to examine the neural plasticity engaged by language experience. But how? Bilinguals proficient in two languages appear to speak with ease in each language and often switch between the two languages, sometimes in the middle of a sentence. In this last period of research we have learned that the two languages are always active, creating a context in which there is mutual influence and the potential for interference. Yet proficient bilinguals rarely make errors of language, suggesting that they have developed exquisite mechanisms of cognitive control. Contrary to the view that bilingualism adds complication to the language system, the new research demonstrates that all languages that are known and used become part of the same language system. A critical insight is that bilingualism provides a tool for examining aspects of the cognitive and neural architecture that are otherwise obscured by the skill associated with native language performance in monolingual speakers. In this talk I illustrate this approach and consider the consequences that bilingualism holds more generally for cognition and learning.
Critical periods in language, cognitive development, and massive online experiments
Only a few years ago, it was widely accepted that cognitive abilities develop during childhood and adolescence, with cognitive decline beginning at around 20 years old for fluid intelligence and in the 40s for crystallized intelligence. The obvious outlier was language learning, which appeared to begin its decline in early childhood. All these claims have been challenged by a recent flurry of studies — both from my lab and others. In particular, the ability to collect large-scale datasets has brought into sharp relief patterns in the data that were previously indiscernible. The fluid/crystallized intelligence distinction has broken down: at almost any age between 20 and 60, some abilities are still developing, some are at their peak, and some are in decline (Hartshorne & Germine, 2015). Most surprisingly, evidence suggests that the ability to learn syntax is preserved until around 18 (Hartshorne, Tenenbaum, & Pinker, 2018). This has upended our understanding of language learning and its relationship to the rest of cognitive development. In this talk, I review recently published findings, present some more recent unpublished findings, and try to chart a path forward. I also discuss the prospects for massive online experiments not just for understanding cognitive development, but for understanding cognition in general.
Phonetic studies of vowels in two endangered languages
I report acoustic and articulatory studies of two endangered languages with typologically unusual vowel systems. Bora, a Witotoan language spoken in Peru and Colombia, has been described as having a three-way backness contrast between unrounded high vowels /i ɨ ɯ/. An audio-video investigation of Bora vowels reveals that while none of these vowels are produced with lip rounding, the vowel described as /ɨ/ is actually a front vowel with extreme lingual-dental contact. This appears to be a previously unknown vowel type. Kalasha, a Dardic language spoken in Pakistan, has been described as having 20 vowel phonemes: plain /i e a o u/, nasalized /ĩ ẽ ã õ ũ/, retroflex /i˞ e˞ a˞ o˞ u˞/, and retroflex nasalized /ĩ˞ ẽ˞ ã˞ õ˞ ũ˞/. An ultrasound study of Kalasha vowels shows that the vowels described as retroflex are produced not with retroflexion but with various combinations of tongue bunching and other tongue shape differences, raising questions about whether and how these phonetic dimensions should be integrated with notions of basic vowel quality. I discuss implications of the Bora and Kalasha data for models of vowel features.
The link between syllabic nasals and glottal stops in American English
Examples of syllabic nasals in English abound in phonological studies (e.g., Hammond 1999, Harris 1994, Wells 1995), but there is little explicit discussion of the surrounding consonant environments that condition syllabic nasals. In this talk, we examine the production of potential word-final syllabic nasals in American English when preceded by a range of consonants, including oral stops, glottal stops, fricatives, flaps, and laterals. The data come from a laboratory study of read speech with speakers from New York and other regions. Acoustic analysis indicates that [n̩] is only prevalent after [ʔ], with some extension to /d/. The results suggest that /ən/ is the appropriate underlying representation for syllabic nasals, and an articulatory sketch to account for the prevalence of [n̩] after coronal stops is laid out. To provide a link between the [ʔ] allophone and syllabic nasals, previous analyses of acoustic enhancement proposed for glottally-reinforced [tʔ] in coda position (e.g. Keyser and Stevens 2006) are extended to the syllabic nasal case.
| Date | Speaker | Title |
| --- | --- | --- |
| 9/4/2019 | – | First day of classes |
| 9/18/2019 | Stefan Kaufmann (UConn) | How fake is fake Past? |
| 10/2/2019 | Lisa Davidson (NYU) | The link between syllabic nasals and glottal stops in American English |
| 10/16/2019 | Jeff Mielke (NC State) | Phonetic studies of vowels in two endangered languages |
| 10/23/2019 | Uriel Cohen Priva (Brown) | Understanding lenition through its causal structure |
| 10/30/2019 | Joshua Hartshorne (Boston College) | Critical periods in language, cognitive development, and massive online experiments |
| 11/7/2019 (Thursday at 10am) | Judith Kroll (UC Irvine) | The fate of the native language in second language learning: A new hypothesis about bilingualism, mind, and brain |
| 11/13/2019 | Kate Lindsey (BU) (cancelled) | TBD |
| 11/20/2019 | Daniel Altshuler (Hampshire College) | Causal reasoning about states |
| 12/4/2019 | Roman Feiman (Brown) | TBD |