Category Archives: People

Welcome Roman Feiman!

The Department of Cognitive, Linguistic and Psychological Sciences (CLPS) is delighted to welcome our new psycholinguist Roman Feiman, who joins us as of September 2018 as an Assistant Professor. Roman received his PhD in Psychology from Harvard University in 2015. He was a postdoctoral fellow at Harvard for a year, and then at UC San Diego for another two. His work draws on a variety of approaches and methods from cognitive developmental psychology, language acquisition, psycholinguistics, and formal semantics. Now at Brown, he directs the brand-new Brown Language and Thought Lab. You can find the lab here.

Over the next few years Roman will be teaching – among other things – courses on language processing (CLPS 1800) and on child language acquisition of syntax, semantics, and pragmatics (CLPS 1660), a seminar on Logic in Language and Thought, and co-teaching a course on Machine and Human Learning with Ellie Pavlick. Stay tuned for other courses. Welcome, Roman!

NSF Grant to Cohen Priva

Uriel Cohen Priva has been awarded a grant from the NSF. Read about it here.

Human language use reflects the nature of human communication. For instance, frequent words tend to have fewer sounds than infrequent ones, which facilitates quick production and understanding. However, little is known about more fine-grained distinctions. For instance, English has more /k/ than /p/ sounds. Does that reflect a property of human language and its physiological and perceptual underpinnings, or a historical accident? Answering such questions requires comparative data on the frequency and phonological makeup of words in many languages. This project will build on existing textual sources and word frequency lists to provide the phonological makeup of words in close to 200 low-resource languages. The phonological word lists will provide an invaluable resource for the understanding of human language and provide much-needed linguistic resources to low-resource languages. The outputs of the project will be made public and easily accessible, thereby assisting in documenting and teaching the processed languages, and in building computational linguistic resources such as text-to-speech engines.

The research team, including trained undergraduate and graduate students, will create rules to translate alphabets to phonemic representations for multiple languages. The team will then collect textual resources and word frequency lists from publicly available sources such as online Bibles, newspapers, and movie subtitles. The rules will be applied separately to each source, and the resulting phonological representations will be made publicly available, so that not only researchers but also the general public will be able to use and interact with the data. The researchers will then use the data to investigate whether the information-theoretic properties of sounds show distributional universality: do sounds tend to provide similar amounts of information cross-linguistically, and if so, does their information content correlate with their phonetic properties? Universality is an age-old question, and the similarities and differences of properties across languages can provide new insights into language use. Specifically, the researchers will use information-theoretic properties to predict whether low information content or other previously studied phonological properties are likely to promote consonant weakening in those languages.
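As a rough illustration of the kind of pipeline the project description sketches – applying grapheme-to-phoneme rules to a word frequency list and tallying phoneme frequencies – here is a minimal Python sketch. The rules and the toy frequency list are invented for illustration; the project's actual rules are language-specific and far more elaborate (e.g., context-sensitive):

```python
from collections import Counter

# Hypothetical grapheme-to-phoneme rules for a toy orthography,
# tried longest-match-first; real systems handle context-sensitive rules.
G2P_RULES = [("sh", "ʃ"), ("ch", "tʃ"), ("a", "a"), ("k", "k"), ("i", "i"), ("t", "t")]

def to_phonemes(word):
    """Translate an orthographic word into a phoneme sequence."""
    phonemes = []
    i = 0
    while i < len(word):
        for graph, phon in G2P_RULES:
            if word.startswith(graph, i):
                phonemes.append(phon)
                i += len(graph)
                break
        else:
            i += 1  # skip characters with no matching rule
    return phonemes

# Toy word-frequency list (word, count), e.g. drawn from subtitles or an online Bible.
freq_list = [("shak", 10), ("chia", 5), ("kit", 2)]

# Token-weighted phoneme frequencies across the word list.
counts = Counter()
for word, freq in freq_list:
    for phon in to_phonemes(word):
        counts[phon] += freq

print(counts.most_common())
```

Frequencies of this kind, aggregated per language, are the sort of input from which cross-linguistic information-theoretic comparisons (such as the /k/ versus /p/ question above) can be made.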

This award reflects NSF’s statutory mission and has been deemed worthy of support through evaluation using the Foundation’s intellectual merit and broader impacts review criteria.


New paper published by Luchkina et al.: Eighteen‐month‐olds selectively generalize words from accurate speakers to novel contexts (Dev Sci 2018;e12663)

Congratulations to Elena, Dave, and Jim for a new paper out in Developmental Science! The title and abstract are as follows:

Eighteen‐month‐olds selectively generalize words from accurate speakers to novel contexts.

The present studies examine whether and how 18‐month‐olds use informants’ accuracy to acquire novel labels for novel objects and generalize them to a new context. In Experiment 1, two speakers made statements about the labels of familiar objects. One used accurate labels and the other used inaccurate labels. One of these speakers then introduced novel labels for two novel objects. At test, toddlers saw those two novel objects and heard an unfamiliar voice say one of the labels provided by the speaker. Only toddlers who had heard the novel labels introduced by the accurate speaker looked at the appropriate novel object above chance. Experiment 2 explored possible mechanisms underlying this difference in generalization. Rather than making statements about familiar objects’ labels, both speakers asked questions about the objects’ labels, with one speaker using accurate labels and the other using inaccurate labels. Toddlers’ generalization of novel labels for novel objects was at chance for both speakers, suggesting that toddlers do not simply associate hearing the accurate label with the reliability of the speaker. We discuss these results in terms of potential mechanisms by which children learn and generalize novel labels across contexts from speaker reliability.

The full paper can be found here. In addition, more information about Elena can be found on her professional website.

New paper published by Masapollo et al.: Articulatory peripherality modulates relative attention to the mouth during visual vowel discrimination (J Acoust Soc Am. 141(5): 4037)

Congratulations to Matt, Lauren, and Jim for a paper published recently in the Journal of the Acoustical Society of America! The title and abstract are as follows:

Articulatory peripherality modulates relative attention to the mouth during visual vowel discrimination.

Masapollo, Polka, and Ménard (2016) have recently reported that adults from different language backgrounds show robust directional asymmetries in unimodal visual-only vowel discrimination: a change in mouth-shape from one associated with a relatively less peripheral vowel to one associated with a relatively more peripheral vowel (in F1-F2 articulatory/acoustic vowel space) results in significantly better performance than a change in the reverse direction. In the present study, we used eye-tracking methodology to examine the gaze behavior of English-speaking subjects while they performed Masapollo et al.’s visual vowel discrimination task. We successfully replicated this directional effect using Masapollo et al.’s visual stimulus materials, and found that subjects deployed selective attention to the oral region compared to the ocular region of the model speaker’s face. In addition, gaze fixations to the mouth were found to increase while subjects viewed the more peripheral vocalic articulations compared to the less peripheral articulations, perhaps due to their larger, more extreme oral-facial kinematic patterns. This bias in subjects’ pattern of gaze behavior may contribute to asymmetries in visual vowel perception.

The full paper can be found here.

New paper published by Cohen Priva et al.: Converging to the baseline: Corpus evidence for convergence in speech rate to interlocutor’s baseline. (J Acoust Soc Am. 141(5): 2989)

Congratulations to Uriel, Emily, and Lee for a paper published recently in the Journal of the Acoustical Society of America! The title and abstract are as follows:

Converging to the baseline: Corpus evidence for convergence in speech rate to interlocutor’s baseline.

Speakers have been shown to alter their speech to resemble that of their conversational partner. Do speakers converge with their interlocutor’s baseline, or does convergence stem from conversational properties that similarly affect both parties? Using the Switchboard corpus, this paper shows evidence for speakers’ convergence in speech rate to the other party’s baseline, not only to conversation-specific properties. Study 1 shows that the method for calculating speech rate used in this paper is powerful enough to replicate established findings. Study 2 demonstrates that speakers are mostly affected by their own behavior in other contexts, but that they also converge to their interlocutor’s baseline, established using the interlocutor’s behavior in other contexts. Study 2 also shows that speakers change their speech rate in response to the interlocutor’s characteristics: speakers speak more slowly with older speakers regardless of the interlocutor’s speech rate, and male speakers speak faster with other male speakers.

The full paper can be found here.