Colloquium (9/16/2015): Edward Gibson (MIT)
First, we show that all the world’s languages that we can currently analyze minimize syntactic dependency lengths to some degree, as would be expected under information-processing considerations. Next, we consider communication-based origins of the lexicons and grammars of human languages. Chomsky has famously argued that this is a flawed hypothesis, because of the existence of such phenomena as ambiguity. Contrary to Chomsky, we show that ambiguity out of context is not only not a problem for an information-theoretic approach to language, it is a feature. Furthermore, word lengths are optimized on average according to predictability in context, as would be expected under an information-theoretic analysis. Then we show that language comprehension appears to function as a noisy-channel process, in line with communication theory. Given si, the intended sentence, and sp, the perceived sentence, we propose that people maximize P(si | sp), which is equivalent to maximizing the product of the prior P(si) and the likelihood of the noise process P(si → sp). We discuss how thinking of language as communication in this way can explain aspects of the origin of word order, most notably that most human languages are either SOV with case-marking or SVO without case-marking.
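(Not part of the abstract, but as a rough illustration of the noisy-channel computation described above: the sketch below ranks candidate intended sentences for a perceived sentence by the product of a prior and a noise likelihood. The unigram prior, the word-substitution noise model, and all of the probabilities are placeholder assumptions, not the models used in the talk.)

```python
# Minimal noisy-channel sketch (illustrative only): choose the intended
# sentence s_i that maximizes P(s_i | s_p) ∝ P(s_i) * P(s_i -> s_p).
# The prior and noise model are toy stand-ins.
import math

def prior_logprob(sentence, lm):
    """Log prior P(s_i) from a (hypothetical) unigram language model."""
    return sum(math.log(lm.get(w, 1e-6)) for w in sentence.split())

def noise_logprob(intended, perceived, per_word_change=0.3):
    """Log P(s_i -> s_p): toy model penalizing each differing word."""
    i, p = intended.split(), perceived.split()
    changes = sum(a != b for a, b in zip(i, p)) + abs(len(i) - len(p))
    return changes * math.log(per_word_change)

def best_interpretation(perceived, candidates, lm):
    """Return the candidate intended sentence with the highest posterior."""
    return max(candidates,
               key=lambda s: prior_logprob(s, lm) + noise_logprob(s, perceived))

# Toy example: a low-prior perceived sentence is "repaired" toward a
# higher-prior candidate when only a small amount of noise is required.
lm = {"the": 0.2, "mother": 0.05, "gave": 0.05, "candle": 0.01,
      "candy": 0.04, "to": 0.2, "daughter": 0.05}
perceived = "the mother gave the candle to the daughter"
candidates = [perceived, "the mother gave the candy to the daughter"]
print(best_interpretation(perceived, candidates, lm))
```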
LingLang Lunch (9/30/2015): Matt Hall (University of Connecticut)
LingLang Lunch (10/7/2015): Stephen Emet (Brown University)
LingLang Lunch (10/21/2015): Polly Jacobson (Brown University)
(1) a. Bode can ski that course in 4 minutes, and Lindsay can too.
b. Bode can ski that course in 4 minutes and Lindsay can ski that course in 4 minutes too.
There is a wealth of literature going back decades arguing that this is so, and within the SLM (silent linguistic material) approach there are two main competing hypotheses: (a) that ski that course in 4 minutes in (1a) is silenced on the basis of formal identity with the VP in the first conjunct, or (b) that it is silenced on the basis of semantic identity with (the meaning of) the first VP. I begin this talk with reasons to doubt the conventional wisdom (in either of its incarnations); there is particularly strong evidence against the formal identity view. I will also (time permitting) answer some of the traditional arguments for the SLM view, particularly a couple based on how (b) is understood (which is a very old argument) and new arguments based on processing considerations.
I then turn to new material (tentative and in progress) centering on the interaction of Neg Raising and VP Ellipsis. Neg Raising is the phenomenon by which (2a) is easily understood as (2b), where the not is in the lower clause:
(2) a. Bernie doesn’t think we should be talking about the e‐mails.
b. Bernie thinks we shouldn’t be talking about the e‐mails.
One view is that there is a syntactic process moving a negation from the lower to the higher clause. The alternative view is that the negation in (2a) is semantically in the higher clause, and the stronger reading in (2b) arises via pragmatic strengthening. I will be concerned with cases like (3) (and more elaborated versions):
(3) Bernie doesn’t think we should be talking about the e‐mails, and neither does Hillary.
The full argument requires more elaborated examples, but the bottom line will be that if there is syntactic Neg Raising, then the conditions for SLM must be formal identity. But there is good reason to reject that view. And so, turning this around: assuming there is no SLM (especially no SLM sanctioned by formal identity), there cannot be Neg Raising, and some version of the pragmatic strengthening story must be correct.
(NOTE: This is in preparation for an upcoming talk at a workshop honoring Laurence Horn; he has done extensive work on Neg Raising, arguing against the syntactic solution.)
LingLang Lunch (11/4/2015): Matt Hall (University of Connecticut)
Colloquium (11/11/2015): Evelina Fedorenko (Mass General Hospital/Harvard Medical School)
Colloquium (11/18/2015): Robert J. Podesva (Stanford University)
Dyadic interactions between friends were recorded in a sound-attenuated environment staged like a living room. The acoustic analysis focused on the incidence of creaky voice (using Kane et al.’s 2013 neural network model) and vowel quality (the lowering and retraction of the front lax vowels, in accordance with the California Vowel Shift). Computer vision techniques were additionally applied to quantify the magnitude of body movements (movement amplitude) and to identify when speakers were smiling.
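(The abstract does not specify which computer vision techniques were used. As one hedged illustration, a standard way to quantify movement amplitude is mean inter-frame pixel difference, sketched below with OpenCV; the video path and any phrase-level windowing are hypothetical.)

```python
# Illustrative sketch only: quantify gross body movement ("movement
# amplitude") as the mean absolute grayscale difference between frames.
# A generic technique, not necessarily the method used in this study.
import cv2
import numpy as np

def movement_amplitude(video_path):
    """Return a per-frame list of mean absolute inter-frame differences."""
    cap = cv2.VideoCapture(video_path)
    amplitudes = []
    prev = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None:
            # Larger mean difference = more movement between frames.
            amplitudes.append(float(np.mean(cv2.absdiff(gray, prev))))
        prev = gray
    cap.release()
    return amplitudes

# Hypothetical usage: average amplitude over a phrase-sized stretch of video.
# amps = movement_amplitude("dyad_recording.mp4")
# print(np.mean(amps))
```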
Results show that body movement and facial expression predict the realization of both linguistic variables. Creaky voice was more common in phrases where speakers moved less, in phrases where they were not smiling (for women), and in interactions where speakers reported feeling less comfortable. The front lax vowels were lower (more shifted) among women, and in phrases where speakers (regardless of sex) were smiling.
Speakers use their bodies in non-random ways to structure linguistic variation, so analysts can improve quantitative models of variation by attending to forms of embodied affect. Focusing on the body can also facilitate the development of more comprehensive social analyses of variation, many of which rely solely on correlations between linguistic practice and social category membership. I conclude by discussing the implications of an embodied view of variation for language change.