Tag Archives: Syntax

LingLang Lunch (9/30/2015): Matt Hall (University of Connecticut)

Using Non-Language to Understand Language

Communicative systems crucially depend on the fact that they are shared between those who send signals and those who receive them. How did this shared-ness come about? Specifically, are producers and comprehenders subject to the same sets of heuristics when creating a communication system de novo? Here, I explore these questions by asking naïve participants (hearing non-signers) to describe simple events in pantomime, to comprehend pantomimed descriptions, or both. By initially segregating production from comprehension, we can establish a clearer foundation for understanding the (tacit or explicit) negotiations that take place during dynamic communicative interaction. I will summarize the results of several experiments on pantomime production, comprehension, and dynamic interaction, and will suggest that these findings can help us better understand the nonlinguistic origins from which grammar develops.

LingLang Lunch (10/21/2015): Polly Jacobson (Brown University)

You think there’s Silent Linguistic Material, but I don’t: Neg Raising meets Ellipsis

The first part of this talk will set the stage with material that some, but hopefully not all, of the LingLangLunchers have heard. This background part is my ‘take’ on the existence of so-called Silent Linguistic Material (SLM) in so-called ‘ellipsis’ constructions (the relevant one here is VP Ellipsis). The issue of whether or not there is “SLM” is illustrated by the following question: since the sentence in (1a) can easily be understood as (1b) (and, without additional context, this is pretty much the only interpretation), is (1a) actually at some level the same as (1b), where ski that course in 4 minutes is deleted or silenced?

(1) a. Bode can ski that course in 4 minutes, and Lindsay can too.
     b. Bode can ski that course in 4 minutes, and Lindsay can ski that course in 4 minutes too.

There is a wealth of literature going back decades arguing that this is so, and within the SLM approach there are two main competing hypotheses: (a) that ski that course in 4 minutes in (1a) is silenced on the basis of formal identity with the VP in the first conjunct, or (b) that it is silenced on the basis of semantic identity with (the meaning of) the first VP. I begin this talk with reasons to doubt the conventional wisdom (in either of its incarnations); there is particularly strong evidence against the formal identity view. I will also (depending on the time) answer some of the traditional arguments for the SLM view, particularly a couple based on how (1b) is understood (which is a very old argument), as well as some new arguments based on processing considerations.

I then turn to new material (tentative and in progress) centering on the interaction of Neg Raising and VP Ellipsis. Neg Raising is the phenomenon by which (2a) is easily understood as (2b), where the not is in the lower clause:

(2) a. Bernie doesn’t think we should be talking about the e-mails.
     b. Bernie thinks we shouldn’t be talking about the e-mails.

One view is that there is a syntactic process moving a negation from lower to higher clause. The alternative view is that the negation in (2) semantically is in the higher clause, and there is a pragmatic strengthening. I will be concerned with cases like (3) (and more elaborated versions):

(3) Bernie doesn’t think we should be talking about the e-mails, and neither does Hillary.

The full argument requires more elaborated examples, but the bottom line will be that if there is syntactic Neg Raising, then the conditions for SLM must be formal identity. But there is good reason to reject that view. And so, turning this around: assuming there is no SLM (especially no SLM sanctioned by formal identity) then there cannot be Neg Raising, and some version of the pragmatic strengthening story must be correct.

(NOTE: This is in preparation for an upcoming talk at a workshop honoring Laurence Horn; he has done extensive work on Neg Raising, arguing against the syntactic solution.)

LingLang Lunch (4/6/2016): Matthew Barros (Yale University)

Sluicing and Ellipsis Identity

This talk focuses on sluicing constructions: the ellipsis of TP in a Wh-question, leaving a Wh-phrase “remnant” overt. Sluicing is subject to an identity condition that must hold between the sluiced question and its antecedent. There is currently no consensus on whether this condition should be characterized as syntactic or semantic in nature, or whether a hybrid condition that makes reference to both semantic and syntactic identity is needed (Merchant 2005, Chung 2013, Barker 2013). I provide a new identity condition that captures extant syntactic generalizations while allowing enough wiggle room to let in detectable mismatches between the antecedent and sluice. The new identity condition also lets in “pseudosluices” alongside isomorphic sluices, where the sluiced question is a cleft or a copular question while the antecedent is not. Pseudosluicing has often been proposed as a last-resort mechanism, available only when an isomorphic structure is independently ruled out (Rodrigues et al. 2009, Vicente 2008, van Craenenbroeck 2010). I defend a view on which pseudosluicing is not a special case of sluicing, so that the identity condition should not distinguish between copular and non-copular clauses in the determination of identity. The new identity condition achieves this by making no reference to the syntactic content of the ellipsis site.

LingLang Lunch (4/29/2016): Florian Jaeger (University of Rochester)

From processing to language change and cross-linguistic distributions

I’ll present recent attempts to contribute to a wee little question in linguistics: the role of ‘language use’ in language change and, as a consequence, in the cross-linguistic distribution of linguistic properties. Specifically, I focus on the extent to which communicative and processing biases shape language. I hope to demonstrate how advances in computational psycholinguistics can contribute to this question: advances in our empirical and theoretical understanding of the biases operating during language production and understanding allow more predictive and principled notions of language use, and advances in empirical methods allow us to more directly test hypotheses about not only whether, but also how, these biases come to shape aspects of grammar.

I’ll present a medley of case studies on this question, which I hope will make for some interesting discussion. I’ll begin with a computational study on the syntax of five languages: do the grammars of these languages order information in a way that makes the language easier to process than expected by chance (Gildea & Jaeger, 2015)? I then present work on miniature artificial language learning showing that the biases we observe in the first study operate during language acquisition, and that they are strong enough to lead learners to deviate from the input language toward languages that are easier to process and encode information more efficiently (Fedzechkina, Jaeger, & Newport, 2012; Fedzechkina, Newport, & Jaeger, 2016; Fedzechkina & Jaeger, under review). Time permitting, I’ll also show how related biases might cause change within a speaker’s production over that speaker’s lifetime (suggesting a second path through which language processing can affect language change; Buz, Tanenhaus, & Jaeger, 2016). Alternatively, I can show how adaptive processes during language understanding continuously reshape our linguistic representations throughout our lives (Fine, Jaeger, Farmer & Qian, 2013; Kleinschmidt & Jaeger, 2015), including the acquisition of new (e.g., dialectal) syntax (Fraundorf & Jaeger, under review). Come prepared to vote (and to be outvoted).
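The dependency-length measure at the heart of the first case study can be illustrated in a few lines. The sketch below is not from Gildea & Jaeger (2015); the toy sentence and its head indices are invented purely for illustration:

```python
def total_dependency_length(heads):
    """Sum of linear distances between each word and its syntactic head.
    heads[i] is the 0-based index of word i's head, or None for the root."""
    return sum(abs(i - h) for i, h in enumerate(heads) if h is not None)

# Invented toy parse of "the dog chased the cat":
# "the" -> "dog", "dog" -> "chased" (root), "the" -> "cat", "cat" -> "chased"
heads = [1, 2, None, 4, 2]
length = total_dependency_length(heads)  # 1 + 1 + 1 + 2 = 5
```

Comparing such totals for attested word orders against random reorderings of the same dependency trees is, roughly, how one can ask whether a grammar orders words so as to keep dependencies shorter than chance would predict.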

LingLang Lunch (11/10/2017): Hadas Kotek (New York University)

Hadas Kotek’s main research focus is on generative syntax and its interaction with formal semantics. Her recent research topics include the syntax and semantics of wh-questions, Association with Focus, relative clauses, ellipsis, and comparative and superlative quantifiers. For more information, her website is here.

Which QuD (joint work with Matt Barros)

Sluicing is ellipsis in a question, leaving only a wh-phrase overt (Ross 1969), e.g.: Sally called someone, but I don’t know who. Recent work on the identity conditions underlying the licensing of sluicing has converged on the need for a semantic approach to ellipsis licensing, where the sluiced question must be congruent to a Question under Discussion (QuD; Roberts 1996; e.g. Ginzburg and Sag, 2000; AnderBois, 2011; Weir, 2014; Barros, 2014; Kotek and Barros, to appear). In this talk, we address problems of over-generation predicted by this account, stemming from a more general concern: what is the source of QuDs, and how are they constrained?

Adopting the notion of strategy trees and super-/sub-questions from Rojas-Esponda (2014) (cf. Büring 2003, Roberts 2012), we propose that sluicing is licensed by the most recently raised QuD in the discourse, and not by its super- or sub-questions, nor by unrelated QuDs. We show how this proposal accounts for several test cases that are problematic for traditional approaches, including cases of sprouting (1), Dayal and Schwarzschild’s (2010) Antecedent Correlate Harmony generalization (2–3), and contrast sluicing (4). This approach furthermore provides a natural explanation for Barker’s (2014) ‘Answer Ban’: the observation that the antecedent clause must not resolve, or even partially resolve, the issue raised by the sluiced interrogative.

(1) Sally left, but I don’t know {why, when, in which car, with whom, …}
(2) Joan ate a donut.
a. * Fred doesn’t know what.
b.    Fred doesn’t know which donut.
(3) Joan ate something.
a.    Fred doesn’t know what.
b. * Fred doesn’t know which donut.
(4) A:    Did(n’t) Mary call [Jack]F?
B: # I don’t know who.

LingLang Lunch (9/12/2012): Polly Jacobson (Brown University)

Polly Jacobson’s research is mainly concerned with constructing formal models of the semantics and syntax of natural language, particularly on the way that the syntax and the semantics interact. Her work is carried out within the tradition of model-theoretic (“formal”) semantics, combined with a Categorial Grammar syntax. For more information, her website is here.

You think there’s Silent Linguistic Material, but I don’t: Neg Raising meets Ellipsis


Colloquium (9/16/2015): Edward Gibson (MIT)

Edward Gibson’s “TedLab” investigates the relationship between culture and cognition; how people learn, represent, and process language; and how all of these affect the structure of human languages. For more information, his website is here.

Information theoretic approaches to language universals

Finding explanations for the observed variation in human languages is the primary goal of linguistics, and promises to shed light on the nature of human cognition. One particularly attractive set of explanations is functional in nature, holding that language universals are grounded in the known properties of human information processing. The idea is that grammars of languages have evolved so that language users can communicate using sentences that are relatively easy to produce and comprehend.  In this talk, I summarize results from explorations into several linguistic domains, from an information-processing point of view.

First, we show that all the world’s languages that we can currently analyze minimize syntactic dependency lengths to some degree, as would be expected under information-processing considerations. Next, we consider communication-based origins of the lexicons and grammars of human languages. Chomsky has famously argued that this is a flawed hypothesis, because of the existence of such phenomena as ambiguity. Contrary to Chomsky, we show that ambiguity out of context is not only not a problem for an information-theoretic approach to language, it is a feature. Furthermore, word lengths are optimized on average according to predictability in context, as would be expected under an information-theoretic analysis. Then we show that language comprehension appears to function as a noisy-channel process, in line with communication theory. Given si, the intended sentence, and sp, the perceived sentence, we propose that people maximize P(si | sp), which is equivalent to maximizing the product of the prior P(si) and the likelihood of the noise process P(si → sp). We discuss how thinking of language as communication in this way can explain aspects of the origin of word order, most notably that most human languages are SOV with case marking, or SVO without case marking.
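The noisy-channel computation sketched above can be made concrete with a toy example. The candidate sentences, prior values, and noise model below are all invented for illustration, not taken from the talk:

```python
# Toy noisy-channel comprehension: posterior over intended sentences si
# given a perceived sentence sp, via P(si | sp) ∝ P(si) * P(si -> sp).

priors = {  # P(si): prior plausibility of each candidate intended sentence
    "the mother gave the candle to the daughter": 0.7,
    "the mother gave the daughter to the candle": 0.3,
}

def noise_likelihood(si, sp):
    """P(si -> sp): toy noise model. Perceiving the sentence exactly as
    intended is most likely; any corruption gets a small flat probability."""
    return 0.9 if si == sp else 0.1

def posterior(sp):
    """Normalized P(si | sp) over the candidate intended sentences."""
    scores = {si: p * noise_likelihood(si, sp) for si, p in priors.items()}
    z = sum(scores.values())
    return {si: s / z for si, s in scores.items()}

post = posterior("the mother gave the daughter to the candle")
```

With these numbers the literal (perceived) parse still wins despite its lower prior; increasing the prior asymmetry or the assumed noise rate tips the inference toward the more plausible intended sentence, which is exactly the trade-off the noisy-channel account turns on.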