
LingLang Lunch (10/24/2018): Kyuwon Moon

Kyuwon Moon received her Ph.D. in Linguistics from Stanford University and is currently an independent scholar. She is interested in the effect of social factors on linguistic variation, specifically prosodic variation. For more information, see her website.


Feminine voice in workplace: phonetic variation in Seoul women’s speech

This talk explores the role of the “feminine voice” in the workplace, in contrast to speech in non-work-related settings. While young women’s sweet, friendly voices have always been valued in the service industry, they are even more salient in the post-Fordist market of South Korea, where the customer is king. Young female professionals use their polite voices and the feminine charm known as aegyo (a term for a deliberately cute and pleasing attitude) as linguistic commodities, constructing a “compliant and professional” persona that balances professionalism and femininity.

Based on data collected during fieldwork at a call center in Seoul, South Korea, I examine the use of two prominent phonetic variables by young female consultants: raising of (o) and LHL% (a rising-falling tone in Intonation Phrase (IP)-final position). The acoustic and experimental analyses focus on the stylistic use of these variables in IP-final position, a focal site of prosodic/grammatical structure and pragmatic meaning in Seoul Korean. The talk thus reveals the social meanings of the phonetic properties of these variables and argues for the necessity of investigating prominent language-specific sites of variation.

Fall 2018 Speaker Schedule

Date | Speaker | Title
9/5/2018 | First day of classes |
9/12/2018 | |
9/19/2018 | |
9/26/2018 | Mirjam Fried (Charles University in Prague) | When main clauses go AWOL: a constructional account of polarity shifts in insubordination
10/3/2018 | |
10/10/2018 | |
10/17/2018 | |
10/24/2018 | Kyuwon Moon | Feminine voice in workplace: phonetic variation in Seoul women’s speech
10/31/2018 | (Susan Goldin-Meadow Colloquium) |
11/7/2018 | Scott Seyfarth (Ohio State University) | Variable external sandhi in a communication-oriented phonology
11/14/2018 | |
11/21/2018 | Thanksgiving (university closed) |
11/28/2018 | Steven Frankland (Princeton University) | Structured re-use of neural representations in language and thought
12/5/2018 | |

LingLang Lunch (9/26/2018): Mirjam Fried (Charles University in Prague)

Mirjam Fried is Associate Professor in the Department of Linguistics at Charles University in Prague (CUNI). She is interested in the cognitive and functional aspects of language description and analysis, and she investigates various aspects of morphology and morphosyntax from both synchronic and diachronic perspectives. For more information, see her website.


When main clauses go AWOL: a constructional account of polarity shifts in insubordination

The language of spontaneous dialog is an indispensable resource for elucidating the complex patterns of language production and reception (Levinson & Holler 2014). Moreover, the natural state of spoken language is its permanent variability, which makes a systematic description of its properties a real challenge, but at the same time offers an informative window into the ways new patterns and new categories may develop in interactional practice. The process of forming a new linguistic device is also the main concern of this talk, which addresses the general question of how language users may recruit existing grammatical resources in order to create new linguistic patterns with new functions. I pursue the hypothesis that grammatical change originates in the interplay between a specific item and the concrete environment in which it is used, and that this interaction helps shape the kind of change that eventually results.

Using material from the spoken corpora of the Czech National Corpus, I will illustrate these issues through a particular case so far largely untouched in the relevant research: the use of the word jestli ‘if/whether’ not in its etymologically motivated function as a syntactic complementizer (as in Nikdo neví, jestli to Martin udělá ‘Nobody knows if Martin will do it’), but in one of its non-propositional functions, that of expressing a subjective guess that something is likely (1) or unlikely (2); note also that the lexeme (esi, jesi in the examples below) tends to be phonetically reduced, sometimes quite drastically (1):

(1) esi vona nečekala na telefon

‘[I don’t know for sure but I think] she may’ve been waiting for a phone call.’

(lit. ‘if/whether she didn’t wait for a phone call’)

(2) jesi vůbec tam maj ňáký dřevo na topení

‘[I don’t know for sure but I don’t think] they have any wood to burn.’

(lit. ‘if/whether they have any wood at all for burning’)

These patterns exemplify one type of cross-linguistically widespread and well-attested phenomenon known as insubordination (Evans 2007, 2009; Evans & Watanabe 2016), whereby an erstwhile subordinate clause introduced by a dedicated subordinating complementizer retains its form but loses its main clause and develops new conventional meanings. In this talk, I will concentrate on the cluster of questions concerning the gradual loss of the main clause (full clause > lexically fixed reduced clause > discourse particle > 0), zeroing in specifically on the resulting polarity patterns in the free-standing jestli-clauses, whose use of negation is observably different from that of their regular syntactic counterparts. I suggest that the origins and development of insubordination must be analyzed primarily as an issue of discourse organization rather than from a purely syntactic perspective (such as loss of a paratactic structure or simple ellipsis of the main clause), but with consequences for syntactic behavior as well.

The analysis speaks to both typological and theoretical concerns. (i) It confirms that this subset of jestli-insubordination in conversational Czech can be related to the typology proposed by Evans in two of the three general categories: expressing a broad spectrum of modal meanings (here, subjective epistemic assessment, as in 1-2) and signaling presupposed material (negation and disagreement in 2). And (ii) from a broader theoretical perspective, insubordination makes a case for a particular approach to grammatical description, namely, one that takes into account both internal features of linguistic units and a ‘holistic’ perspective on specific conventionalized constellations of linguistic units. This multi-dimensional view is the basic conceptual tenet of constructional approaches and allows naturally for integrating both compositional and non-compositional properties of linguistic patterns.

Colloquium (10/17/2012): Eugene Charniak (Brown University)

Bayes’ Law as Psychologically Real

If, as I will argue, the brain manipulates probabilities, then it should do so according to Bayes’ Law. After all, Bayes’ Law is normative, and Darwin would not expect us to do anything less. Furthermore, there is a lot to be learned from taking Bayes seriously. I consider myself a nativist despite my statistical bent, and Bayes tells me how to combine an informative prior with the evidence of our senses: compute the likelihood of the evidence. It then tells us that this likelihood must come from a very broad generative model of everything we encounter. Lastly, since Bayes says nothing about how to do any of this, I presume that the computational methods themselves are not learned but innate, and I will argue that there seem to be very few options for how this can be done, with something like particle filtering being one of the few. I will illustrate these ideas with work in computational linguistics, both my own and that of others.
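For reference, the Bayes’ Law invoked here is the standard identity relating a prior over hypotheses H to the likelihood of the observed evidence E (textbook notation, not anything specific to the talk):

\[
P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)}, \qquad P(E) = \sum_{H'} P(E \mid H')\,P(H')
\]

In the abstract’s terms, P(H) is the informative prior, P(E | H) is the likelihood of the evidence supplied by a generative model, and P(H | E) is the updated belief after seeing the evidence.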

LingLang Lunch (9/25/2012): Geoffrey K. Pullum (University of Edinburgh)

Psychology and the Claimed Infinitude of Sentences

Some linguists take it to be an established universal that human languages have infinitely many sentences. I explore their reasons for believing this. I argue that no evidence could support or refute the infinitude claim; no convincing argument has been advanced for its truth; no consequences would follow from it; it might not be universally true anyway; and there are no significant consequences for psychology if that is the case. I focus especially on the supposed link between the infinitude claim and “creative” human cognitive abilities such as being able to come up with new utterances that are appropriate to context.

LingLang Lunch (10/10/2012): Junwen Lee (Brown University)

A Unitary Analysis of Colloquial Singapore English Lah

The linguistic function of the Colloquial Singapore English (CSE) particle lah has been characterized variously as a marker that conveys solidarity, warmth, and informality; an attenuation or emphasis marker; an assertion marker; and an accommodation marker. Because the particle can be pronounced with several pitch contours, it has generally been analyzed either as a set of homonymic variants distinguished by pitch and function, or as a unitary particle with the same meaning despite tonal differences. However, I argue against both approaches: the former conflates pragmatic function and semantic meaning, while the latter ignores the systematic differences in function that correlate with tonal differences. Instead, using a relevance-theoretic framework, I propose that the different pragmatic functions of lah result from the interaction between its unitary semantic meaning and the effect of pitch as a signal of modality, specifically a falling tone that marks declaratives/imperatives and a rising tone that marks interrogatives. The advantages of this approach are also discussed in relation to another CSE particle, hor, which similarly differs in pragmatic function depending on whether it is pronounced with a falling or rising tone.

LingLang Lunch (10/31/2012): Peter Graff (MIT)

Communicative Efficiency in the Lexicon

Some of the earliest as well as some of the most recent work on the role of communicative efficiency in natural language examined the patterning of word length in the lexicon (Zipf 1949; Piantadosi et al. 2011). Frequent and predictable words tend to be phonologically shorter, while their infrequent and unpredictable counterparts tend to be longer, thus relativizing the articulatory effort invested by the speaker to the probability of her being misunderstood. In this talk, I show that it is not only word length but also the actual phonological composition of words that facilitates the successful communication of intended messages. I show that the English lexicon is probabilistically organized such that the number of words that rely exclusively on a given contrast for distinctness follows from that contrast’s perceptibility (cf. Miller and Nicely 1955), beyond what is expected from the occurrence frequencies of the contrasting sounds. For example, there are more minimal pairs like pop:shop in the English lexicon, which rely on the highly perceptible /p/:/ʃ/ opposition, than expected from the frequencies of /p/ and /ʃ/. Conversely, there are fewer minimal pairs like fought:thought, which rely on the confusable /f/:/θ/ contrast, than expected from the frequencies of /f/ and /θ/. Redundancy in the phonological code is thus not randomly distributed, but exists to supplement imperceptible distinctions between meaningful linguistic units as needed. I also show that English is not unique in this respect: across 60 languages, the perceptibility of a given contrast predicts the extent to which words in the lexicon rely on that contrast for distinctness. I argue that these patterns arise from the fact that speakers choose among words in ways that accommodate anticipated mistransmission (Mahowald et al., to appear), and I present computational evidence in favor of the hypothesis that the global optimization of the phonological lexicon could have arisen from the aggregate effects of such word choices over the course of a language’s history (cf. Martin 2007).
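To make the observed-versus-expected comparison concrete, here is a minimal Python sketch. It uses a hypothetical toy lexicon and a simple frequency-preserving permutation baseline of my own; it is not Graff’s method, data, or statistical model.

# Toy illustration: compare the observed number of minimal pairs that rely
# on a given contrast with the number expected from segment frequencies alone.
import random
from itertools import combinations

# Hypothetical mini-lexicon, each word a tuple of phoneme symbols.
LEXICON = [
    ("p", "o", "p"), ("sh", "o", "p"), ("p", "o", "t"), ("sh", "o", "t"),
    ("p", "i", "n"), ("sh", "i", "n"), ("f", "i", "n"), ("th", "i", "n"),
    ("f", "o", "t"), ("th", "o", "t"),
]

def minimal_pairs(lexicon, a, b):
    """Count word pairs differing in exactly one position, where that
    position holds the two contrasting sounds a and b."""
    count = 0
    for w1, w2 in combinations(lexicon, 2):
        if len(w1) != len(w2):
            continue
        diffs = [(x, y) for x, y in zip(w1, w2) if x != y]
        if len(diffs) == 1 and set(diffs[0]) == {a, b}:
            count += 1
    return count

def permuted_lexicon(lexicon, rng):
    """One baseline lexicon: keep word lengths, permute all segment tokens
    across slots, preserving each segment's overall frequency."""
    segments = [s for w in lexicon for s in w]
    rng.shuffle(segments)
    out, i = [], 0
    for w in lexicon:
        out.append(tuple(segments[i:i + len(w)]))
        i += len(w)
    return out

def expected_count(lexicon, a, b, n_samples=1000, seed=0):
    """Monte Carlo estimate of the minimal-pair count expected from
    segment frequencies alone."""
    rng = random.Random(seed)
    total = sum(minimal_pairs(permuted_lexicon(lexicon, rng), a, b)
                for _ in range(n_samples))
    return total / n_samples

if __name__ == "__main__":
    for a, b in [("p", "sh"), ("f", "th")]:
        obs = minimal_pairs(LEXICON, a, b)
        exp = expected_count(LEXICON, a, b)
        print(f"/{a}/:/{b}/  observed={obs}  expected~{exp:.2f}")

The claim in the abstract is then that, across the real lexicon, the gap between observed and frequency-expected counts tracks the perceptibility of the contrast.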

Colloquium (11/1/2012): Terry Au (University of Hong Kong)

Access to Childhood Language Memory

All adults seem to have amnesia about much that happened in their childhood. Does early memory simply wither away through massive synaptic pruning and cell death in early brain development? Or is it just masked by interference from later experience? This talk explores these questions in the specific case of childhood language memory. Research into the re-learning of long-disused childhood languages turns out to have much to offer. It provides relatively objective evidence for access to early childhood memory in adulthood via re-learning. It complements linguistic deprivation research to highlight the special status of childhood language experience in phonology and morphosyntax acquisition. It thereby suggests a strategy to salvage seemingly forgotten childhood languages, which are often also heritage languages. Equally importantly, re-learning childhood languages may well open a window onto how language affects cognitive development not only during, but also well beyond, the childhood years.

LingLang Lunch (11/14/2012): Brian Dillon (University of Massachusetts Amherst)

Syntactic complexity across the at-issue / not-at-issue divide

Much work in psycholinguistics has been dedicated to uncovering the source of complexity effects in syntactic processing (Chomsky & Miller 1963; Gibson, 1998; Levy, 2007; Lewis, 1996; Lewis & Vasishth, 2005; Yngve, 1960; i.a.). There are many theoretical accounts of syntactic complexity effects, from Chomsky and Miller’s (1963) observations on the difficulty of self-embedding, to the cost of introducing new discourse referents while simultaneously maintaining syntactic predictions (Gibson, 1998), among many others. One recent and influential model attempts to reduce syntactic complexity to interference effects related to memory retrieval (Lewis & Vasishth, 2005). In this talk I present joint work with Lyn Frazier and Chuck Clifton that investigates the source of syntactic complexity by looking at how the at-issue / not-at-issue distinction relates to syntactic complexity effects. Not-at-issue content such as appositives and parentheticals does not directly contribute to the truth conditions of a sentence, and so has been argued to form a separate ‘dimension’ of meaning (Potts, 2005). A series of judgment experiments shows that syntactic complexity in the not-at-issue dimension does not lead to complexity effects in offline judgments, while complexity in at-issue content does. I then present eye-tracking data that helps to locate the source of the complexity effects in online comprehension. The results provide initial evidence that (i) the parser distinguishes at-issue and not-at-issue content, and (ii) the complexity effects observed in the present data cannot be reduced to retrieval interference. I suggest that the at-issue / not-at-issue distinction is used to structure parsing routines by maintaining distinct stacks for different types of linguistic content, thereby minimizing complexity for the sentence as a whole.
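As a purely illustrative sketch (my own toy example in Python, not the authors’ parsing model), the snippet below shows the arithmetic behind the “distinct stacks” idea: if embeddings from at-issue and not-at-issue material are tracked on separate stacks, each stack stays shallower than a single combined stack over the same hypothetical sentence.

# Toy illustration of the two-stack idea: compare the maximum depth of one
# combined embedding stack with the per-dimension maxima when at-issue and
# not-at-issue material are tracked separately.

# Hypothetical parse events for a sentence with a parenthetical inside an
# appositive inside a relative clause: ("open"/"close", dimension).
EVENTS = [
    ("open", "at-issue"),       # main clause
    ("open", "at-issue"),       # relative clause
    ("open", "not-at-issue"),   # appositive
    ("open", "not-at-issue"),   # parenthetical inside the appositive
    ("close", "not-at-issue"),
    ("close", "not-at-issue"),
    ("close", "at-issue"),
    ("close", "at-issue"),
]

def max_depths(events):
    """Return the max depth of a single combined stack and of one stack
    per dimension, given a sequence of open/close events."""
    combined, max_combined = 0, 0
    per_dim, max_per_dim = {}, {}
    for action, dim in events:
        delta = 1 if action == "open" else -1
        combined += delta
        per_dim[dim] = per_dim.get(dim, 0) + delta
        max_combined = max(max_combined, combined)
        max_per_dim[dim] = max(max_per_dim.get(dim, 0), per_dim[dim])
    return max_combined, max_per_dim

if __name__ == "__main__":
    combined, per_dim = max_depths(EVENTS)
    print("single-stack max depth:", combined)   # 4
    print("per-dimension max depths:", per_dim)  # {'at-issue': 2, 'not-at-issue': 2}

On this toy sentence the combined stack reaches depth 4, while neither dimension-specific stack exceeds depth 2, which is the intuition behind keeping not-at-issue embedding from inflating complexity for the sentence as a whole.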

LingLang Lunch (5/8/2013): Kathryn Davidson (University of Connecticut)

What can sign languages tell us about the semantic/pragmatic interface?

As adult language users, we are all aware that sometimes we mean exactly what we say, and sometimes we mean a lot more. Understanding precisely how language meaning arises from the complex interplay of semantics (what we say) and pragmatics (what we mean) is a difficult question. In this talk, I will focus on two phenomena at the semantic/pragmatic interface, scalar implicatures and the restriction of quantifier domains, from the point of view of American Sign Language (ASL), gaining new insights into the relationship of semantics and pragmatics based on the behavior of ASL. In the case of scalar implicatures, ASL makes frequent use of general-use coordinators instead of the separate lexical items “and” and “or,” which I show leads to strikingly fewer exclusive interpretations of disjunction than a lexically contrasting scale like the English “and”/“or” pair. In the case of quantifier domains, the gradient use of vertical space in ASL can provide clearer judgments about domains for quantification than the gradient options available in spoken languages, such as intonation. In both cases, I show how the manual/visual language modality allows linguists, philosophers, and psychologists to test important issues concerning the relationship of semantics and pragmatics in natural languages.