What can sign languages tell us about the semantic/pragmatic interface?
As adult language users, we are all aware that sometimes we mean exactly what we say, and sometimes we mean a lot more. Understanding precisely how language meaning arises from the complex interplay of semantics (what we say) and pragmatics (what we mean) is a difficult question. In this talk, I will focus on two phenomena at the semantic/pragmatic interface — scalar implicatures and the restriction of quantifier domains — from the point of view of American Sign Language (ASL), using the behavior of ASL to gain new insights into the relationship between semantics and pragmatics. In the case of scalar implicatures, ASL makes frequent use of general-use coordinators instead of the separate lexical items “and” and “or,” which I show leads to strikingly fewer exclusive interpretations of disjunction than a lexically contrasting scale like that of English. In the case of quantifier domains, the gradient use of vertical space in ASL can provide clearer judgments about domains for quantification than the gradient options available in spoken languages, such as intonation. In both cases, I show how the manual/visual language modality allows linguists, philosophers, and psychologists to test important issues concerning the relationship of semantics and pragmatics in natural languages.
Language-Thought Interactions in Development
How do language and thought influence each other during development? Drawing on the cases of spatial and numerical cognition, I will discuss recent work from my lab exploring this question. For both cases, I will show evidence of interesting language-thought correspondences that raise questions about the mechanisms through which language and cognition become linked. In the case of space, I will focus on three studies exploring the hypothesis that acquiring frame-of-reference terms (left-right, north-south) causally affects spatial representation, tested in three different populations: English-speaking preschoolers, two cohorts of Nicaraguan Sign Language users, and adults outside of Quito, Ecuador, who speak Kichwa (a dialect of Quechua spoken in Ecuador). In the case of number, I will focus on emerging evidence that numerical acuity (in the analog magnitude system) and the acquisition of counting knowledge are correlated even in preschoolers. These studies suggest that language acquisition is deeply tied to the development of non-verbal conceptual systems for representing space and number, raising new questions and hypotheses about the roots of this relationship.
Using Non-Language to Understand Language
Communicative systems crucially depend on the fact that they are shared between those who send signals and those who receive them. How did this shared-ness come about? Specifically, are producers and comprehenders subject to the same sets of heuristics when creating a communication system de novo? Here, I explore these questions by asking naïve participants (hearing non-signers) to describe simple events in pantomime, to comprehend pantomimed descriptions, or both. By initially segregating production from comprehension, we can establish a clearer foundation for understanding the (tacit or explicit) negotiations that take place during dynamic communicative interaction. I will summarize the results of several experiments on pantomime production, comprehension, and dynamic interaction, and will suggest that these findings can help us better understand the nonlinguistic origins from which grammar develops.
Keeping the hands in mind: Executive function and implicit learning in deaf children
The hands can reveal a lot about the mind. In particular, sign language manifests the human capacity for language in a distinct way, and provides unique opportunities to ask both basic and translational questions about language and cognition. In this talk, I look to Deaf native signers as a way of testing recent claims about the impact of auditory deprivation on cognitive development in two domains: executive function and implicit learning. The results are inconsistent with the auditory deprivation hypothesis, but consistent with the language deprivation hypothesis. I will then consider the translational implications of these findings, identify remaining gaps in our empirical knowledge, and discuss my plans for addressing those gaps.