
LingLang Lunch (10/15/2013): Scott AnderBois (Brown University)

A transitivity-based split in Yucatec Maya control complements (joint work with Grant Armstrong, University of Wisconsin)

In a wide variety of environments (e.g. counterfactual antecedents, optatives, different-subject irrealis complements), Yucatec Maya (YM) has both transitive and intransitive verb forms that have traditionally been labeled ‘subjunctive’. Semantically, we expect to find such forms in the complements of control predicates such as ‘want’ and ‘try’. What we find, however, is that this expectation is met only for complements that are syntactically transitive (e.g. ‘I want to eat it’), not for those that are intransitive (e.g. ‘I want to eat’). The transitive complements contain subjunctive verb forms and show agreement with both the object and the control subject, and are therefore an instance of so-called ‘copy control’. Intransitive control complements, however, show neither agreement marker nor a subjunctive verb form; the verb instead appears as a bare stem in citation form.

In this talk, we propose an account of this split based on independently observable properties of agreement in YM together with the Movement Theory of Control (Hornstein 1999, Hornstein and Polinsky 2010 inter alia). First, we develop a clausal syntax for a variety of YM clauses in which absolutive arguments, including intransitive subjunctive subjects, remain low in the clause. Second, we show that this independently motivated syntax together with a particular approach to control predicts the ungrammaticality of intransitive subjunctive control complements. Finally, we argue that the attested bare forms are in fact nominalizations and therefore have a quite different syntax than the transitives.

LingLang Lunch (10/29/2013): Scott AnderBois (Brown University)

On the exceptional status of reportative evidentials

Evidentials are morphemes, found regularly in roughly 25% of the world’s languages, that encode the speaker’s grounds for making a particular claim p, i.e. what sort of evidence has led them to assert p. Some common types of evidential meaning include: direct visual, direct non-visual, conjectural, reportative, deduction from direct evidence of a result state, and deduction from general world knowledge. Given the common characterization of evidentials as providing the grounds for an assertion (of some sort) that p, we expect it to be infelicitous or contradictory for a speaker who has uttered an evidential assertion, EVID(p), to then deny that p is the case.

While this expectation is consistently borne out for most evidentials, we show that reportative evidentials – i.e. those which indicate that the speaker’s source is what some second or third party has told them – consistently do allow for exactly this. Whereas previous authors have proposed semantic accounts for such data, we argue that these exceptional cases are due to pragmatic perspective-shift. Such shifts are only readily possible in the case of reportatives since they introduce another perspectival agent, whereas other evidentials (even including intuitively ‘weaker’ ones like conjecturals) do not. Beyond explaining the cross-linguistic behavior of reportatives, I argue the proposal also makes correct predictions for languages like Bulgarian where a single evidential form has both reportative and inferential uses.

LingLang Lunch (11/13/2013): Kevin Ryan (Harvard University)

Prosodic weight beyond the rime

A number of phonological systems invoke weight in some form, including stress placement, poetic meter, compensatory lengthening, end-weight effects in word order, the text-setting of lyrics to music, and so forth. This talk focuses on two aspects of weight largely neglected in the modeling literature, namely, (1) how exactly phonological weight is computed for prosodic domains above the syllable (such as words and phrases) and (2) the statistical contributions of onsets to weight in the aforementioned sorts of systems. Focusing especially on word-order variation in English, I propose a theory of “generalized weight mapping” that connects syllable weight to word and phrase weight, though not through simple addition. I also argue that the domain for weight is not rime-bound, as traditionally assumed, but benchmarked by the perceptual centers of syllables, thus incorporating (universally) certain onset effects into the weight percept.
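
As a toy contrast (my own illustration, not Ryan’s generalized weight mapping), the sketch below compares a rime-only weight count with a measure that lets onset segments contribute a small gradient amount, in the spirit of measuring weight from a syllable’s perceptual center; the segment classes and the 0.25 onset bonus are invented for the example.

```python
# Toy illustration: rime-only syllable weight vs. a measure that also
# credits onset segments a small, gradient contribution. The vowel set
# and the 0.25 bonus are made up for the example.

VOWELS = set("aeiou")

def rime_weight(syllable: str) -> int:
    """Classic mora count: 1 for a short-vowel open syllable,
    2 if the rime contains anything beyond a single short vowel."""
    nucleus_index = next(i for i, ch in enumerate(syllable) if ch in VOWELS)
    rime = syllable[nucleus_index:]
    return 1 if len(rime) == 1 else 2

def gradient_weight(syllable: str, onset_bonus: float = 0.25) -> float:
    """Same count, plus a per-segment onset contribution, standing in for
    the idea that weight is not strictly rime-bound."""
    nucleus_index = next(i for i, ch in enumerate(syllable) if ch in VOWELS)
    return rime_weight(syllable) + onset_bonus * nucleus_index

for syl in ["a", "ta", "sta", "tan"]:
    print(syl, rime_weight(syl), gradient_weight(syl))
# "ta" and "sta" tie on the rime-only measure (both weight 1), but the
# gradient measure ranks "sta" slightly heavier, as onset-sensitive
# systems would require.
```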

LingLang Lunch (1/29/2014): Anna Shusterman (Wesleyan University)

Language-Thought Interactions in Development

How do language and thought influence each other during development? Drawing on the cases of spatial and numerical cognition, I will discuss recent work from my lab exploring this question. For both cases, I will show evidence of interesting language-thought correspondences that raise questions about the mechanisms through which language and cognition become linked. In the case of space, I will focus on three studies exploring the hypothesis that acquiring frame-of-reference terms (left-right, north-south) causally affects spatial representation in three different populations: English-speaking preschoolers, two cohorts of Nicaraguan Sign Language users, and Kichwa-speaking adults outside of Quito, Ecuador (Kichwa is a dialect of Quechua spoken in Ecuador). In the case of number, I will focus on emerging evidence that numerical acuity (in the analog magnitude system) and the acquisition of counting knowledge are correlated even in preschoolers. These studies suggest that language acquisition is deeply tied to the development of non-verbal conceptual systems for representing space and number, raising new questions and hypotheses about the roots of this relationship.

LingLang Lunch (2/12/2014): Sohini Ramachandran (Brown University)

A geneticist’s approach to comparing global patterns of genetic and phonemic variation

A longstanding question in human evolution concerns the extent to which differences in language have been a barrier to gene flow among human populations. Human genetic studies often label populations by the language spoken by the sampled individuals, and interpret analyses of genetic variation in light of linguistic relationships. However, no study has attempted a joint analysis of genetic and linguistic data. We have analyzed, separately and jointly, phonemes from 2,082 languages and genetic data from 246 human populations worldwide. We find interesting parallels between the two datasets; one point of divergence is that languages with few neighbors can retain large phoneme inventories, whereas geographically isolated populations lose genetic diversity. I am particularly seeking advice and thoughts on how best to analyze these phoneme inventories in concert with the genetic analyses we are conducting.
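
As a rough illustration of the kind of comparison involved (not the authors’ actual pipeline), one might correlate phoneme inventory size with the number of neighboring languages, and genetic heterozygosity with geographic distance from an origin point; the file names and column labels below are hypothetical.

```python
# Hypothetical sketch of the comparison described above. File names and
# column labels are invented; the real analyses are far richer.
import pandas as pd
from scipy.stats import pearsonr

languages = pd.read_csv("phoneme_inventories.csv")   # columns: language, n_phonemes, n_neighbors
populations = pd.read_csv("genetic_samples.csv")     # columns: population, heterozygosity, km_from_origin

r_lang, p_lang = pearsonr(languages["n_neighbors"], languages["n_phonemes"])
r_gene, p_gene = pearsonr(populations["km_from_origin"], populations["heterozygosity"])

print(f"neighbors vs. inventory size: r = {r_lang:.2f} (p = {p_lang:.3f})")
print(f"distance vs. heterozygosity:  r = {r_gene:.2f} (p = {p_gene:.3f})")
# On the abstract's description, the genetic correlation should be strongly
# negative (isolation by distance), while the phonemic one need not be:
# isolated languages can still retain large inventories.
```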

LingLang Lunch (2/19/2014): Nathaniel Smith (University of Edinburgh)

Building a Bayesian bridge between the physics and the phenomenology of social interaction

What is word meaning, and where does it live? Both naive intuition and scientific theories in fields such as discourse analysis and socio- and cognitive linguistics place word meanings, at least in part, outside the head: in important ways, they are properties of speech communities rather than individual speakers. Yet, from a neuroscientific perspective, we know that actual speakers and listeners have no access to such consensus meanings: the physical processes which generate word tokens in usage can only depend directly on the idiosyncratic goals, history, and mental state of a single individual. It is not clear how these perspectives can be reconciled. This gulf is thrown into sharp relief by current Bayesian models of language processing: models of learning have taken the former perspective, while models of pragmatic inference and implicature have taken the latter. As a result, these two families of models, though built using the same mathematical framework and often by the same people, turn out to contain formally incompatible assumptions.

Here, I’ll present the first Bayesian model which can simultaneously learn word meanings and perform pragmatic inference. In addition to capturing standard phenomena in both of these literatures, it gives insight into how the literal meaning of words like “some” can be acquired from observations of pragmatically strengthened uses, and provides a theory of how novel, task-appropriate linguistic conventions arise and persist within a single dialogue, such as occurs in the well-known phenomenon of lexical alignment. Over longer time scales such effects should accumulate to produce language change; however, unlike traditional iterated learning models, our simulated agents do not converge on a sample from their prior, but instead show an emergent bias towards belief in more useful lexicons. Our model also makes the interesting prediction that different classes of implicature should be differentially likely to conventionalize over time. Finally, I’ll argue that the mathematical “trick” needed to convince word learning and pragmatics to work together in the same model is in fact capturing a real truth about the psychological mechanisms needed to support human culture, and, more speculatively, suggest that it may point the way towards a general mechanism for reconciling qualitative, externalist theories of social interaction with quantitative, internalist models of low-level perception and action, while preserving the key claims of both approaches.
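
The pragmatic-inference side of such a model is in the spirit of rational speech act (RSA) formulations. The sketch below is not Smith’s model, just a minimal illustration of how a pragmatic listener strengthens a literal “some” toward “some but not all”; the states, utterances, and rationality parameter are invented for the example.

```python
# Minimal rational-speech-act style sketch of pragmatic strengthening,
# much simpler than the learning-plus-pragmatics model described above.
import numpy as np

states = ["none", "some-not-all", "all"]
utterances = ["some", "all", "none"]

# Literal semantics: which states each utterance is literally true of.
literal = np.array([
    [0, 1, 1],   # "some" is literally true of some-not-all and all
    [0, 0, 1],   # "all"
    [1, 0, 0],   # "none"
], dtype=float)

def normalize(m):
    return m / m.sum(axis=1, keepdims=True)

L0 = normalize(literal)          # literal listener: P(state | utterance)
S1 = normalize(L0.T ** 4.0)      # speaker: P(utterance | state), rationality = 4
L1 = normalize(S1.T)             # pragmatic listener (uniform prior over states)

for u, row in zip(utterances, L1):
    print(u, dict(zip(states, row.round(2))))
# Hearing "some", the pragmatic listener puts most of its probability on
# "some-not-all": the scalar implicature falls out of the recursion rather
# than being written into the literal meaning.
```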

LingLang Lunch (2/26/2014): Jeff Runner (University of Rochester)

Binding constraints on processing: pronouns are harder than reflexives (In collaboration with Kellan Head, Teach For America, and Kim Morse, University of Rochester)

In this talk I will present the results of a visual world eye-tracking experiment designed to test two claims in the literature: (a) that the binding theory is a set of “linked” constraints, as in the classic binding theory (Chomsky 1981) and HPSG’s binding theory (Sag, Wasow & Bender 2003); and (b) that the binding theory applies as an initial filter on processing (Nicol & Swinney 1989, Sturt 2003). Our results instead support two different claims: (a) that the constraint(s) on pronouns and the constraint(s) on reflexives are separate constraints that apply differently and on different timelines, in line with the “primitives of binding” theory of Reuland (2001, 2011); and (b) that neither constraint applies as an initial filter on processing, as in Badecker & Straub (2001). In particular, the results clearly show that the resolution of the appropriate antecedent for pronouns is delayed relative to that for reflexives. This project started as an examination of the on-line effects of the constraints of the binding theory, developing an approach based on Nicol & Swinney (1989), Badecker & Straub (2001), and Sturt (2003). Recent work, however, implicates a critical role for memory access in reflexive interpretation (Dillon et al. 2013), so I will also try to relate these results to current models of memory access.

LingLang Lunch (3/12/2014): Livia Polanyi (Stanford University)

Step-wise Discourse Topic Construction (joint work with Trevor Doherty and Katherine Hinton)

In 1975, Harvey Sacks, the pioneering conversation analyst, noted that conversation usually flows in a stepwise manner: whatever is being introduced links up to what has just been talked about, so that even though we end up far from where we began, as far as anybody knows a new topic has never been started. Building on this observation, I propose a formalization of the flow of conversation based on a novel approach to Discourse Topic (DT) and to the mechanics of Discourse Coherence, another intuitively useful pre-theoretic construct. DT, a notion often invoked to label stretches of text that seem to “go together”, is treated as an abstract, dynamic phenomenon of varying granularity that emerges from feature representations of sequenced individual utterances. Under this approach, Discourse Coherence is a scalar notion: a next utterance will shift the DT either suddenly, when a number of features change simultaneously, or more gradually, when fewer features shift. Minimal shifts thus result in more coherent discourse, while more radical shifts yield less coherent discourse. A model of musical development is used to illustrate both the conversational phenomenon and the representational formalism we use to account for conversational topic flow. Theoretical challenges to the Linguistic Discourse Model (Polanyi et al.) and Structured Discourse Representation Theory (Asher and Lascarides), in which an Open Right Edge Tree of discourse units is used to account for discourse anaphora, will be raised but, alas, left unresolved.
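
A toy rendering of the scalar-coherence idea (my own illustration, not the talk’s formalism): represent each utterance as a small bundle of features and score a transition by the fraction of features that stay the same. The feature dimensions and values below are invented.

```python
# Toy scalar coherence: the more features preserved between adjacent
# utterances, the more coherent the transition. Features are invented.
def coherence(prev: dict, nxt: dict) -> float:
    """Fraction of shared feature dimensions whose values are unchanged."""
    shared = set(prev) & set(nxt)
    if not shared:
        return 0.0
    return sum(prev[f] == nxt[f] for f in shared) / len(shared)

u1 = {"participants": "A,B", "time": "past",   "place": "Providence", "activity": "concert"}
u2 = {"participants": "A,B", "time": "past",   "place": "Providence", "activity": "dinner"}   # stepwise shift
u3 = {"participants": "C",   "time": "future", "place": "Boston",     "activity": "meeting"}  # abrupt shift

print(coherence(u1, u2))  # 0.75: a gradual, stepwise topic shift
print(coherence(u1, u3))  # 0.0:  many features change at once, i.e. a new topic
```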

LingLang Lunch (3/19/2014): Shiri Lev-Ari (Max Planck Institute, Nijmegen)

How expectations influence language processing, and their cognitive and social consequences: The case of processing the language of non-native speakers

Non-native language is less reliable in conveying the speakers’ intentions, and listeners know and expect that. I propose that these expectations of lower competence lead listeners to adjust their processing when listening to non-native speakers by increasing their reliance on top-down processes and settling for less detailed processing of the language, but only if they have the cognitive resources to do so. I will first present evidence supporting these claims, and then show that the adjustment to non-native speakers temporarily alters the way all language, including one’s own, is processed. I will end by showing one of the social consequences of this adjustment in processing: better perspective-taking when listening to non-native speakers.

LingLang Lunch (4/2/2014): Sheila Blumstein (Brown University)

Variability and Invariance in Speech and Lexical Processing: Evidence from Aphasia and Functional Neuroimaging

The processes underlying both speaking and understanding appear to be easy and seamless. And yet speech input is highly variable, the lexical form of a word shares its sound shape with many other words in the lexicon, and a given word often has multiple meanings. The goal of this research is to examine how the neural system is, on the one hand, sensitive to this variability in speech and lexical processing and, on the other, able to resolve it. To this end, we will review recent research investigating how the perceptual system resolves variability in selecting the appropriate word from among its competitors, in determining which category a sound belongs to, e.g. [d] or [t], and in mapping different acoustic realizations of sounds, e.g. [d-t] vs. [s-z], onto a common abstract feature, e.g. voicing. We will then examine how higher-level information sources such as semantic and conceptual information are used in perceiving degraded speech. The implications of these findings will be considered for models of the functional and neural architecture of language.