Tag Archives: Acquisition

Colloquium (10/17/2012): Eugene Charniak (Brown University)

Bayes’ Law as Psychologically Real

Since the brain manipulates probabilities (I will argue), it should do so according to Bayes’ Law. After all, Bayes’ Law is normative, and Darwin would not expect us to do anything less. Furthermore, there is a lot to be learned from taking Bayes seriously. I consider myself a nativist despite my statistical bent, and Bayes tells me how to combine an informative prior with the evidence of our senses: compute the likelihood of the evidence. It then tells us that this likelihood must be a very broad generative model of everything we encounter. Lastly, since Bayes says nothing about how to do any of this, I presume that the computational methods themselves are not learned but innate, and I will argue that there seem to be very few options for how this can be done, with something like particle filtering being one of the few. I will illustrate these ideas with work in computational linguistics, both my own and that of others.
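
For readers who want the formula spelled out: Bayes’ Law relates the posterior probability of a hypothesis h given evidence e to the prior (the nativist’s informative starting point) and the likelihood (the broad generative model of the evidence):

    P(h | e) = P(e | h) · P(h) / P(e), so P(h | e) ∝ P(e | h) · P(h)

And since the abstract names particle filtering as one of the few candidate mechanisms for carrying out this computation, here is a minimal Python sketch of a single bootstrap particle-filter update. The transition and likelihood functions are placeholders rather than anything from the talk; this illustrates the general technique, not Charniak’s model.

    import random

    def particle_filter_step(particles, weights, transition, likelihood, evidence):
        # Predict: move each particle forward through the model dynamics.
        particles = [transition(p) for p in particles]
        # Update: reweight each particle by the likelihood of the new evidence.
        weights = [w * likelihood(evidence, p) for w, p in zip(weights, particles)]
        total = sum(weights) or 1e-300  # guard against all-zero weights
        weights = [w / total for w in weights]
        # Resample: concentrate particles where the posterior mass is high.
        particles = random.choices(particles, weights=weights, k=len(particles))
        weights = [1.0 / len(particles)] * len(particles)
        return particles, weights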

Colloquium (11/1/2012): Terry Au (University of Hong Kong)

Access to Childhood Language Memory

All adults seem to have amnesia about much that happened in their childhood. Does early memory simply wither away through massive synaptic pruning and cell death in early brain development? Or is it just masked by interference from later experience? This talk explores these questions in the specific case of childhood language memory. Research into the re-learning of long-disused childhood languages turns out to have much to offer. It provides relatively objective evidence for access to early childhood memory in adulthood via re-learning. It complements linguistic deprivation research to highlight the special status of childhood language experience in phonology and morphosyntax acquisition. It thereby suggests a strategy to salvage seemingly forgotten childhood languages, which are often also heritage languages. Equally importantly, re-learning childhood languages may well open a window onto how language affects cognitive development not only during, but also well beyond, the childhood years.

LingLang Lunch (1/29/2014): Anna Shusterman (Wesleyan University)

Language-Thought Interactions in Development

How do language and thought influence each other during development? Drawing on the cases of spatial and numerical cognition, I will discuss recent work from my lab exploring this question. For both cases, I will show evidence of interesting language-thought correspondences that raise questions about the mechanisms through which language and cognition become linked. In the case of space, I will focus on three studies exploring the hypothesis that acquiring frame-of-reference terms (left-right, north-south) causally affects spatial representation in three different populations: English-speaking preschoolers, two cohorts of Nicaraguan Sign Language users, and Kichwa-speaking adults outside of Quito, Ecuador (Kichwa is a dialect of Quechua spoken in Ecuador). In the case of number, I will focus on emerging evidence that numerical acuity (in the analog magnitude system) and the acquisition of counting knowledge are correlated even in preschoolers. These studies suggest that language acquisition is deeply tied to the development of non-verbal conceptual systems for representing space and number, raising new questions and hypotheses about the roots of this relationship.

LingLang Lunch (4/30/2014): Jill Thorson (Brown University)

How intonation interacts with new and given information to guide attention

Toddlers are sensitive to native language rhythm and pitch patterns. From the beginnings of production, they approximate adult-like intonation contours and align them with appropriate semantic/pragmatic intentions. The motivation for our study is to investigate how English-acquiring 18-month-olds are guided by mappings from intonation to information structure during on-line reference resolution in a discourse. We ask whether specific pitch movements (deaccented/monotonal/bitonal) more systematically predict patterns of attention depending on the referring condition (new/given). Additionally, this experiment isolates the role of pitch in directing attention by keeping duration and intensity constant across conditions. Contrary to previous work, results show longer looking times to the target over a distractor in the deaccented condition if the referent is new to the discourse but not if it is given. Also, the bitonal pitch movement directs attention to the target even when it is given in the discourse. Thus, pitch type interacts with new and given information in directing toddler attention. Analyzing how higher-level components combine to direct attention to a referent during discourse helps explain the mechanisms that are important for language and word learning.

LingLang Lunch (11/4/2015): Matt Hall (University of Connecticut)

Keeping the hands in mind: Executive function and implicit learning in deaf children

The hands can reveal a lot about the mind. In particular, sign language manifests the human capacity for language in a distinct way, and provides unique opportunities to ask both basic and translational questions about language and cognition. In this talk, I look to Deaf native signers as a way of testing recent claims about the impact of auditory deprivation on cognitive development in two domains: executive function and implicit learning. Results are inconsistent with the auditory deprivation hypothesis, but consistent with the language deprivation hypothesis. I’ll then consider the translational implications of these findings, identify remaining gaps in our empirical knowledge, and discuss my plans for addressing those gaps.

LingLang Lunch (3/9/2016): Emily Myers (University of Connecticut)

Non-Native Speech Sound Learning: Studies of Sleep, Brain, and Behavior

Speech perception is subject to critical/sensitive period effects, such that acquisition of non-native (L2) speech sounds is far more difficult in adulthood than in childhood. Although adults can be trained to perceive differences among speech sounds that are not part of their native language, success (1) varies across individuals, (2) varies across the specific sounds to be learned, and (3) may or may not generalize to untrained instances. Any theory of L2 speech perception must explain these three phenomena. Accounts of the L2 speech learning process have drawn from traditions in linguistics, psychology, and neuroscience, yet a full description of the barriers to perceptual learning of L2 sounds remains elusive. New evidence from our lab suggests that training on non-native speech produces plastic effects in the brain regions involved in native-language perception, and that consolidation during sleep plays a large role in the degree to which training is maintained and generalizes to new talkers. Further, similar mechanisms may be at play when listeners learn to perceive non-standard tokens in the context of accented speech. Taken together, these findings suggest that speech perception is more plastic than critical period accounts would predict and that individual variability in brain structure and sleep behavior may predict some of the variability in ultimate L2 sound acquisition success.

LingLang Lunch (4/29/2016): Florian Jaeger (University of Rochester)

From processing to language change and cross-linguistic distributions

I’ll present recent attempts to contribute to a wee little question in linguistics: the role of ‘language use’ in language change and, as a consequence, in the cross-linguistic distribution of linguistic properties. Specifically, I focus on the extent to which communicative and processing biases shape language. I hope to demonstrate how advances in computational psycholinguistics can contribute to this question: advances in our empirical and theoretical understanding of the biases operating during language production/understanding allow more predictive and principled notions of language use, and advances in empirical methods allow us to more directly test hypotheses about not only whether, but also how, these biases come to shape aspects of grammar.

I’ll present a medley of case studies on this question, which hopefully will make for some interesting discussion. I’ll begin with a computational study on the syntax of five languages: do the grammars of these languages order information in a way that makes the language easier to process than would be expected by chance (Gildea & Jaeger, 2015)? I then present work on miniature artificial language learning to show that the biases we observe in the first study operate during language acquisition, and that they are strong enough to bias learners to deviate from the input language towards languages that are easier to process and encode information more efficiently (Fedzechkina, Jaeger, & Newport, 2012; Fedzechkina, Newport, & Jaeger, 2016; Fedzechkina & Jaeger, under review). Time permitting, I’ll also show how related biases might cause change within a speaker’s production over that speaker’s lifetime (suggesting a second path through which language processing can affect language change; Buz, Tanenhaus, & Jaeger, 2016). Alternatively, I can show how adaptive processes during language understanding continuously reshape our linguistic representations throughout our lives (Fine, Jaeger, Farmer & Qian, 2013; Kleinschmidt & Jaeger, 2015), including the acquisition of new (e.g., dialectal) syntax (Fraundorf & Jaeger, under review). Come prepared to vote (and to be outvoted).
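
To make the first case study’s question concrete, here is a toy Python sketch of one common operationalization: compare the total dependency length of an attested word order against random reorderings of the same dependency structure. This is an illustration under simplifying assumptions, not the method of Gildea & Jaeger (2015), which uses more sophisticated processing measures.

    import random

    def total_dependency_length(heads):
        # `heads` maps each word position to its head's position (root -> None).
        return sum(abs(i - h) for i, h in heads.items() if h is not None)

    def fraction_of_random_orders_beating_attested(heads, n_samples=1000):
        # Shuffle word positions while keeping the dependency structure fixed,
        # and count how often a random order yields shorter dependencies.
        positions = list(heads)
        observed = total_dependency_length(heads)
        wins = 0
        for _ in range(n_samples):
            shuffled = positions[:]
            random.shuffle(shuffled)
            relabel = dict(zip(positions, shuffled))
            permuted = {relabel[i]: (None if h is None else relabel[h])
                        for i, h in heads.items()}
            if total_dependency_length(permuted) < observed:
                wins += 1
        return wins / n_samples

    # Hypothetical three-word sentence: word 1 is the root, words 0 and 2 depend on it.
    print(fraction_of_random_orders_beating_attested({0: 1, 1: None, 2: 1}))

A grammar whose attested orders keep dependencies short will show a low fraction here, i.e., few random reorderings process more easily than the real one.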

LingLang Lunch (10/19/2016): Matt Masapollo (Brown University)

On the nature of the natural referent vowel bias

Considerable research on cross-language speech perception has shown that perceivers (both adult and infant) are universally biased toward the extremes of articulatory/acoustic vowel space (peripheral in F1/F2 vowel space; Polka & Bohn, 2003, 2011). Much of the evidence for this bias comes from studies showing that perceivers consistently discriminate vowels in an asymmetric manner. More precisely, perceivers perform better at detecting a change from a relatively less peripheral vowel (e.g., /e/) to a relatively more peripheral vowel (e.g., /i/) than at detecting the same change presented in the reverse direction. Although the existence of this perceptual phenomenon (i.e., the natural referent vowel [NRV] bias) is well established, the processes that underlie it remain poorly understood. One account of the NRV bias, which derives from the Dispersion–Focalization Theory (Schwartz et al., 2005), is that extreme vocalic articulations give rise to acoustic vowel signals that exhibit increased spectral salience due to formant frequency convergence, or “focalization.” In this talk, I will present a series of experiments aimed at assessing whether adult perceivers are indeed sensitive to differences in formant proximity while discriminating vowel stimuli that fall within a given category, and, if so, whether that sensitivity is attributable to general properties of auditory processing or to phonetic processes that extract articulatory information available across sensory modalities. In Experiment 1, English- and French-speaking perceivers showed directional asymmetries consistent with the focalization account as they attempted to discriminate synthetic /u/ variants that systematically differed in their peripherality, and hence in their degree of formant proximity (between F1 and F2). In Experiment 2, similar directional effects were found when English- and French-speaking perceivers attempted to discriminate natural /u/ productions that differed in their articulatory peripherality, when only acoustic-phonetic or only visual-phonetic information was present. Experiment 3 investigated whether and how the integration of acoustic and visual speech cues influences the effects documented in Experiment 2. When acoustic and visual cues were phonetically congruent, an NRV bias was observed. In contrast, when acoustic and visual cues were phonetically incongruent, this bias was disrupted, confirming that both sensory channels shape this bias in bimodal auditory-visual vowel perception. Collectively, these findings suggest that perceivers are universally biased to attend to extreme vocalic gestures specified optically, in terms of articulatory kinematic patterns, as well as acoustically, in terms of formant convergence patterns. A complete understanding of this bias is not only important to speech perception theories, but also provides a critical basis for the study of phonetic development and of the perceptual factors that may constrain vowel inventories across languages.
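
For concreteness, the formant proximity that the focalization account appeals to can be quantified as the F2–F1 distance on an auditory scale, with smaller distances indicating a more focal vowel. Below is a minimal Python sketch using Traunmüller’s (1990) Hz-to-Bark conversion; the formant values are hypothetical illustrations, not the stimuli from these experiments.

    def hz_to_bark(f):
        # Traunmüller's (1990) approximation of the auditory Bark scale.
        return 26.81 * f / (1960.0 + f) - 0.53

    def formant_proximity(f1_hz, f2_hz):
        # F2 - F1 distance in Bark; smaller values = greater focalization.
        return hz_to_bark(f2_hz) - hz_to_bark(f1_hz)

    # A peripheral /u/ variant with low, converging F1 and F2 is more focal
    # (smaller F2-F1 distance) than a less peripheral variant:
    print(formant_proximity(300, 700))   # ~3.5 Bark
    print(formant_proximity(400, 1100))  # ~5.1 Bark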

Colloquium (11/2/2016): Valentine Hacquard (University of Maryland)

Grasping at Factivity

Speakers mean more than their sentences do, because they can take a lot about their audience for granted. This talk explores how presuppositions and pragmatic enrichments play out in acquisition. How do children untangle semantic from pragmatic contributions to what speakers mean? The case study I will focus on is how children learn the meanings of the words think and know. When and how do children figure out that think, but not know, can be used to report false beliefs? When and how do they figure out that with know, but not think, speakers tend to presuppose the truth of the complement clause? I will suggest that the path of acquisition is traced by the child’s understanding both of where such verbs occur and of why speakers use them. (joint work with Rachel Dudley and Jeff Lidz)

Colloquium (05/02/2018): Sandra Waxman (Northwestern University)

Sandra Waxman is interested in infants’ and young children’s concepts, words, and reasoning across cultures and languages. Her research focuses on how language develops in infants from a very young age and across different languages, how infants from various cultural backgrounds develop reasoning about the natural world, and how the development of language interacts with the development of reasoning and concepts. For more information, see her website.

Becoming human: How (and how early) do infants link language and cognition?

Language is a signature of our species. To acquire a language, infants must identify which signals are part of their language and discover how these are linked to the objects and events they encounter and to their core representations. For infants as young as 3 months of age, listening to human vocalizations promotes the formation of object categories, a fundamental cognitive capacity. Moreover, this precocious link emerges from a broader template that initially encompasses vocalizations of human and non-human primates but is rapidly tuned specifically to human vocalizations. In this talk, I’ll focus on the powerful contributions of both ‘nature’ and ‘nurture’ as infants discover increasingly precise links between language and cognition and use them to learn about their world. I’ll also consider the place of this language-cognition link in broader questions of cognitive and developmental science.