(CLPS) Scott AnderBois:
Scott AnderBois earned his PhD from the Dept. of Linguistics at UC Santa Cruz in 2011, joining the Brown CLPS faculty in 2013. His research explores issues in semantics, pragmatics, and their interfaces through a variety of methods including primary fieldwork, with a particular focus on Yucatec Maya and other Mayan and Austronesian languages. AnderBois’s research applies a broadly dynamic perspective to understanding the ways in which sentence meanings interact with the discourses in which they are uttered, as well as the specific contributions of various morphemes, words, and sentence structures to these interactions.
Link: Personal Site
(CLPS) Sheila Blumstein (emerita):
Blumstein’s research is concerned with understanding the processes and mechanisms involved in speaking and understanding and delineating their neural basis. Her research has focused on how the continuous acoustic signal is transformed by perceptual and neural mechanisms into the sound structure of language, how the sound structure of language maps to the lexicon (mental dictionary), and how the mental dictionary is organized for the purposes of language comprehension and production. Her lab uses a number of methodologies including behavioral measures with young adults, functional neuroimaging of young and older adults, and behavioral measures of aphasic patients correlated with structural measures of neuropathology.
Link: The Brown Speech Lab
(CLPS) Uriel Cohen Priva:
One of the fascinating aspects of language is the interaction of multiple constraints of differing natures: physiological, cognitive, communicative, and social pressures all shape human language. Speakers may wish to ease the difficulty of articulating some word, but still need to make themselves understood by others. Change processes that are motivated by functional considerations may adversely affect other functional needs: word-final /t/ deletion in English may be motivated by the redundancy of /t/ in English, but when speakers elide the word-final /t/ in can’t, they fail to make themselves understood. My research focuses on the interaction between these multiple pressures. In particular, I study how perceptual and articulatory pressures interact with the amount of information linguistic units carry.
Link: Personal Site
(CLPS) Roman Feiman:
Roman Feiman received his PhD in Psychology from Harvard University in 2015. He completed his postdoctoral work at Harvard and UC San Diego before coming to Brown. His work draws on a variety of approaches and methods from cognitive developmental psychology, language acquisition, psycholinguistics, and formal semantics. Roman directs the Brown Language and Thought Lab.
(Slavic Studies) Masako Fidler:
Masako Fidler works in two areas of linguistics: sound symbolism and quantitative text analysis. Her monograph Onomatopoeia in Czech (2014) explores how onomatopoeia as a linguistic device informs our understanding of grammar and discourse. The book was awarded the Best Book in Slavic Linguistics in 2015 by the American Association of Teachers of Slavic and East European Languages. Her research based on quantitative text data, entitled the Needle-in-a-Haystack Method (NHM), is a collaborative project with the Institute of the Czech National Corpus at Charles University in Prague; the project thus far has produced outputs concerning the reception of texts in different historical periods, verbal aspect and genre, and the relationship between inflection and political discourse.
Link: Personal Site
(CLPS) Pauline Jacobson:
Pauline Jacobson’s research concerns constructing formal models of the syntax and semantics of natural language, with a particular focus on how the two systems work together. Her research program is driven by the conviction that the architecture of the grammar is maximally simple; this leads to the hypothesis of ‘Direct Compositionality’, which proposes that the syntax and semantics work ‘in tandem’ (the former is a system that speakers unconsciously ‘know’, allowing them to predict the set of well-formed expressions in their language, and the latter is the system that allows them to pair each such expression with a meaning). This modeling involves using formal tools developed within logic and applying them to subtle domains in language such as the interpretation of pronouns (which is highly variable depending on where they appear), the interpretation of elliptical constructions (constructions where material appears to be missing), and the interactions of these with each other and with things like quantification. She recently completed a graduate-level textbook on compositional semantics (Compositional Semantics: An Introduction to the Syntax/Semantics Interface) published by Oxford University Press.
Link: Personal Site
(CLPS) James Morgan:
Beginning academic life as a linguist with interests in language processing and computation, I switched over in graduate school to become a (developmental) psychologist. In my youth, claims about innate bases and properties of language predominated. I am not altogether unsympathetic with that viewpoint, but it has always seemed to me that the most powerful argument for language preprogramming must be made by considering the strongest possible empirically supportable assumptions about richness of language input and the power of learners’ perceptual, representational, and analytic capacities, and then determining specific aspects of language where these fall short. I have devoted my career to exploring the nature of language input (the auditory and, more recently, visual experiences of infants) and the nature of infants’ language processing abilities. I have focused particularly on infants’ spoken word recognition – a set of complex perceptual and computational skills fundamental for language comprehension and acquisition, involving arguably the most central unit of language structure.
Link: Infant Research Lab
(CS) Ellie Pavlick:
Ellie Pavlick works on natural language processing, focusing on building cognitively motivated representations of semantics and pragmatics that enable computers to understand and generate human language. Her lab is currently looking at questions of grounded language representation and acquisition (how can computers learn representations that connect language to the non-linguistic world?), emergence (what types of constraints on architectures and/or learning objectives give rise to compositional representations?), and the social functions of language (how is language used to signal belief and identity in social contexts, and how do we differentiate connotative from denotative aspects of meaning?). Ellie leads Brown’s Language Understanding and Representation (LUNAR) Lab and is a member of the Brown Integrative General AI (BigAI) Project.