Colloquium (10/17/2012): Eugene Charniak (Brown University)

Bayes’ Law as Psychologically Real

Since the brain manipulates probabilities (I will argue), it should do so according to Bayes’ Law. After all, Bayes’ Law is normative, and Darwin would not expect us to do anything less. Furthermore, there is a lot to be learned from taking Bayes seriously. I consider myself a nativist despite my statistical bent, and Bayes’ Law tells me how to combine an informative prior with the evidence of our senses: compute the likelihood of the evidence. It then tells us that this likelihood must be a very broad generative model of everything we encounter. Lastly, since Bayes’ Law says nothing about how to do any of this, I presume that the computational methods themselves are not learned but innate, and I will argue that there seem to be very few options for how this can be done, with something like particle filtering being one of the few. I will illustrate these ideas with work in computational linguistics, both my own and that of others.
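The update rule the abstract appeals to, posterior ∝ prior × likelihood, can be sketched in a few lines. Everything below (the two hypotheses, the prior, the likelihoods) is purely illustrative and not from the talk:

```python
# Minimal sketch of Bayesian updating: combine an informative
# ("innate") prior with the likelihood of the sensory evidence.
# Hypotheses and numbers are invented for illustration.

def bayes_update(prior, likelihood):
    """Bayes' Law: posterior(h) = prior(h) * likelihood(h) / P(evidence)."""
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnormalized.values())  # probability of the evidence
    return {h: p / z for h, p in unnormalized.items()}

# An informative prior over two hypothetical hypotheses...
prior = {"H1": 0.9, "H2": 0.1}
# ...and how likely the observed evidence is under each.
likelihood = {"H1": 0.2, "H2": 0.8}

posterior = bayes_update(prior, likelihood)
# A strong prior can survive moderately unfavorable evidence:
# here H1 remains more probable than H2 after the update.
```

Particle filtering, mentioned in the abstract as one of the few candidate mechanisms, approximates this same update with a population of samples rather than an explicit distribution.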

LingLang Lunch (9/25/2012): Geoffrey K. Pullum (University of Edinburgh)

Psychology and the Claimed Infinitude of Sentences

Some linguists take it to be an established universal that human languages have infinitely many sentences. I explore their reasons for believing this. I argue that no evidence could support or refute the infinitude claim; no convincing argument has been advanced for its truth; no consequences would follow from it; it might not be universally true anyway; and there are no significant consequences for psychology if that is the case. I focus especially on the supposed link between the infinitude claim and “creative” human cognitive abilities such as being able to come up with new utterances that are appropriate to context.

LingLang Lunch (10/10/2012): Junwen Lee (Brown University)

A Unitary Analysis of Colloquial Singapore English Lah

The linguistic function of the Colloquial Singapore English (CSE) particle lah has been characterized variously as a marker to convey solidarity, warmth and informality; an attenuation or emphasis marker; an assertion marker; and an accommodation marker. As the particle can be pronounced with several pitch contours, it has generally been analyzed either as a set of homonymic variants distinguished by pitch and function, or as a unitary particle that has the same meaning despite tonal differences. However, I argue against both approaches – the former conflates pragmatic function and semantic meaning, while the latter ignores the systematic differences in function that correlate with tonal differences. Instead, using a relevance-theoretic framework, I propose that the different pragmatic functions of lah result from the interaction between its unitary semantic meaning and the effect of pitch as a signal of modality, specifically a falling tone that marks declaratives/imperatives and a rising tone that marks interrogatives. The advantages of this approach are also discussed in relation to another CSE particle, hor, which similarly differs in pragmatic function depending on whether it is pronounced with a falling or rising tone.

LingLang Lunch (10/31/2012): Peter Graff (MIT)

Communicative Efficiency in the Lexicon

Some of the earliest as well as some of the most recent work on the role of communicative efficiency in natural language examined the patterning of word-length in the lexicon (Zipf 1949; Piantadosi et al. 2011). Frequent and predictable words tend to be phonologically shorter, while their infrequent and unpredictable counterparts tend to be longer, thus relativizing the articulatory effort invested by the speaker to the probability of her being misunderstood. In this talk, I show that it is not only word-length but also the actual phonological composition of words that facilitates the successful communication of intended messages. I show that the English lexicon is probabilistically organized such that the number of words that rely exclusively on a given contrast for distinctness follows from that contrast’s perceptibility (cf. Miller and Nicely 1955) beyond what is expected from the occurrence frequencies of the contrasting sounds. For example, there are more minimal pairs like pop:shop, which rely on the highly perceptible /p/:/ʃ/ opposition in the English lexicon than expected from the frequencies of /p/ and /ʃ/. Conversely, there are fewer minimal pairs like fought:thought, which rely on the confusable /f/:/θ/ contrast, than expected from the frequencies of /f/ and /θ/. Redundancy in the phonological code is thus not randomly distributed, but exists to supplement imperceptible distinctions between meaningful linguistic units as needed. I also show that English is not unique in this respect: across 60 languages, the perceptibility of a given contrast predicts the extent to which words in the lexicon rely on that contrast for distinctness. I argue that these patterns arise from the fact that speakers choose among words in ways that accommodate anticipated mistransmission (Mahowald et al. to appear) and present computational evidence in favor of the hypothesis that the global optimization of the phonological lexicon could have arisen from the aggregate effects of such word choices over the course of a language’s history (cf. Martin 2007).
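The counting exercise at the heart of this abstract can be sketched with a toy example. Everything here (the four-word mini-lexicon, the ASCII stand-ins "S" for /ʃ/ and "T" for /θ/) is invented for illustration, not data from the talk:

```python
# Toy sketch: count minimal pairs that rely exclusively on a given
# contrast for distinctness. Words are tuples of phoneme symbols;
# "S" stands in for /ʃ/ and "T" for /θ/. The mini-lexicon is invented.

from itertools import combinations

lexicon = {
    "pop":     ("p", "o", "p"),
    "shop":    ("S", "o", "p"),
    "fought":  ("f", "o", "t"),
    "thought": ("T", "o", "t"),
}

def relies_on(w1, w2, contrast):
    """True iff w1 and w2 differ in exactly one segment, and that
    differing segment pair is exactly the given contrast."""
    if len(w1) != len(w2):
        return False
    diffs = [(a, b) for a, b in zip(w1, w2) if a != b]
    return len(diffs) == 1 and set(diffs[0]) == set(contrast)

def count_pairs(lexicon, contrast):
    """Number of minimal pairs in the lexicon relying on the contrast."""
    return sum(
        relies_on(w1, w2, contrast)
        for w1, w2 in combinations(lexicon.values(), 2)
    )

# pop:shop relies on the perceptible /p/:/S/ opposition;
# fought:thought on the confusable /f/:/T/ contrast.
observed_pS = count_pairs(lexicon, ("p", "S"))
observed_fT = count_pairs(lexicon, ("f", "T"))
```

The talk's actual test compares such observed counts against the counts expected from segment frequencies alone (e.g. via frequency-matched randomized lexicons), which this sketch does not implement.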

Colloquium (11/1/2012): Terry Au (University of Hong Kong)

Access to Childhood Language Memory

All adults seem to have amnesia about much that happened in their childhood. Does early memory simply wither away through massive synaptic pruning and cell death in early brain development? Or, is it just masked by interference from later experience? This talk explores these questions in the specific case of childhood language memory. Research into the re-learning of long-disused childhood languages turns out to have much to offer. It provides relatively objective evidence for access to early childhood memory in adulthood via re-learning. It complements linguistic deprivation research to highlight the special status of childhood language experience in phonology and morphosyntax acquisition. It thereby suggests a strategy to salvage seemingly forgotten childhood languages, which are often also heritage languages. Equally importantly, re-learning childhood languages may well open a window onto how language affects cognitive development not only during, but also well beyond, the childhood years.

LingLang Lunch (11/14/2012): Brian Dillon (University of Massachusetts Amherst)

Syntactic complexity across the at-issue / not-at-issue divide

Much work in psycholinguistics has been dedicated to uncovering the source of complexity effects in syntactic processing (Chomsky & Miller 1963; Gibson, 1998; Levy, 2007; Lewis, 1996; Lewis & Vasishth, 2005; Yngve, 1960; i.a.). There are many theoretical accounts of syntactic complexity effects, starting from Chomsky and Miller’s (1963) observations on the difficulty of self-embedding, to the introduction of new discourse referents while simultaneously maintaining syntactic predictions (Gibson, 1998), among many others. One recent and influential model attempts to reduce syntactic complexity to interference effects related to memory retrieval (Lewis & Vasishth, 2005). In the present talk I present joint work with Lyn Frazier and Chuck Clifton that investigates the source of syntactic complexity by looking at how the at-issue / not-at-issue distinction relates to syntactic complexity effects. Not-at-issue content such as appositives and parentheticals does not directly contribute to the truth conditions of a sentence, and so has been argued to form a separate ‘dimension’ of meaning (Potts, 2005). In a series of judgment experiments, we find that syntactic complexity in the not-at-issue dimension does not lead to complexity effects in offline judgments, while complexity in at-issue content does. I then present eye-tracking data that helps to locate the source of the complexity effects in online comprehension. The results provide initial evidence that i) the parser distinguishes at-issue and not-at-issue content, and ii) the complexity effects observed in the present data cannot be reduced to retrieval interference. I suggest that the at-issue / not-at-issue distinction is used to structure parsing routines by maintaining distinct stacks for different types of linguistic content, thereby minimizing complexity for the sentence as a whole.

LingLang Lunch (5/8/2013): Kathryn Davidson (University of Connecticut)

What can sign languages tell us about the semantic/pragmatic interface?

As adult language users, we are all aware that sometimes we mean exactly what we say, and sometimes we mean a lot more. Understanding precisely how language meaning arises from the complex interplay of semantics (what we say) and pragmatics (what we mean) is a difficult question. In this talk, I will focus on two phenomena at the semantic/pragmatic interface: scalar implicatures and the restriction of quantifier domains, from the point of view of American Sign Language (ASL), gaining new insights into the relationship of semantics and pragmatics based on the behavior of ASL. In the case of scalar implicatures, ASL makes frequent use of general use coordinators instead of separate lexical items “and” and “or,” which I show leads to strikingly fewer exclusive interpretations of disjunction than a lexically contrasting scale like English ⟨or, and⟩. In the case of quantifier domains, the gradient use of vertical space in ASL can provide clearer judgments about domains for quantification than the gradient options available in spoken languages, such as intonation. In both cases, I show how the manual/visual language modality allows linguists, philosophers, and psychologists to test important issues concerning the relationship of semantics and pragmatics in natural languages.

LingLang Lunch (9/18/2013): Eva Wittenberg (Tufts University)

Close but no cigar: The differences between kissing, giving kisses, and giving other things

Light verb constructions, such as “Julius is giving Ellie a kiss”, create a mismatch at the syntax-semantics interface. Typically, each argument in a sentence corresponds to one semantic role, such as in “Julius is giving Ellie a present”, where Julius is the Source, Ellie the Goal, and the present the Theme. However, a light verb construction such as “Julius is giving Ellie a kiss” with three arguments describes the same event as the transitive “Julius kissed Ellie” with two arguments: Julius is the Agent, and Ellie the Patient.
This leads to several questions: First, how are light verb constructions such as “giving a kiss” processed differently from sentences such as “giving a present”? Second, at which structural level of representation would we find sources of this difference? Third, what is the effect of using a light verb construction such as “giving a kiss” as opposed to “kissing” on the event representation created in a listener? I will present data from an ERP study, an eye-tracking study, and several behavioral studies to answer these questions.

LingLang Lunch (10/2/2013): Josh Hartshorne (MIT)

Syntax, Semantics, World Knowledge, and Reference

Consider these examples from Winograd (1972):

(1) The city council denied the protesters a permit because they feared violence.
(2) The city council denied the protesters a permit because they advocated violence.

Most people reliably attribute different interpretations to they in (1-2), though in principle in each case the pronoun could refer to the city council, the protesters, or someone else. Levesque (2012) has argued that solving such sentences draws on such a wide range of cognitive abilities that it is an even stronger test of human intelligence than the original Turing Test.

Psycholinguists, too, have been interested in ambiguous pronouns. In 1974, Garvey and Caramazza demonstrated that people have strong expectations about the meanings of pronouns even without having heard the potentially critical end of the sentence:

(3) The city council denied the protesters a permit because they…
(4) Sally frightened Mary because she…
(5) Alfred liked Bernard because he…

These intuitions can be modified by such a bewildering range of contextual manipulations that here, too, many commentators resorted to attributing pronoun reference to inference over ill-specified concepts such as “event structure” (Pickering & Majid, 2007) or “salience” (Song & Fisher, 2004).

In this talk, I concede that pronoun reference is very difficult and that, in the limit, it requires a broad swath of cognition; nonetheless, I argue that we are already in a position to say quite a lot about it. Much of the complexity of the phenomena reduces to the interactions of a small number of abstract structures in semantics and discourse. I demonstrate this with a combination of experiments and computational modeling.

LingLang Lunch (10/15/2013): Scott AnderBois (Brown University)

A transitivity-based split in Yucatec Maya control complements (joint work with Grant Armstrong, University of Wisconsin)

In a wide variety of environments (e.g. counterfactual antecedents, optatives, different-subject irrealis complements), Yucatec Maya (YM) has both transitive and intransitive verb forms which have traditionally been labeled ‘subjunctive’. Semantically, we expect to find such subjunctive forms in the complements of control predicates such as ‘want’ and ‘try’. What we find, however, is that this expectation is met only for complements which are syntactically transitive (e.g. ‘I want to eat it’), but not for those which are intransitive (e.g. ‘I want to eat’). The transitive complements include subjunctive verb forms and show agreement with both the object and the control subject, and are therefore an instance of so-called ‘copy control’. Intransitive control complements, however, show neither agreement marker nor a subjunctive verb form, with the verb instead appearing as a bare stem in citation form.

In this talk, we propose an account of this split based on independently observable properties of agreement in YM together with the Movement Theory of Control (Hornstein 1999, Hornstein and Polinsky 2010 inter alia). First, we develop a clausal syntax for a variety of YM clauses in which absolutive arguments, including intransitive subjunctive subjects, remain low in the clause. Second, we show that this independently motivated syntax together with a particular approach to control predicts the ungrammaticality of intransitive subjunctive control complements. Finally, we argue that the attested bare forms are in fact nominalizations and therefore have a quite different syntax than the transitives.