Tag Archives: Neurolinguistics

LingLang Lunch (9/18/2013): Eva Wittenberg (Tufts University)

Close but no cigar: The differences between kissing, giving kisses, and giving other things

Light verb constructions, such as “Julius is giving Ellie a kiss”, create a mismatch at the syntax-semantics interface. Typically, each argument in a sentence corresponds to one semantic role, such as in “Julius is giving Ellie a present”, where Julius is the Source, Ellie the Goal, and the present the Theme. However, a light verb construction such as “Julius is giving Ellie a kiss” with three arguments describes the same event as the transitive “Julius kissed Ellie” with two arguments: Julius is the Agent, and Ellie the Patient.
This leads to several questions: First, how are light verb constructions such as “giving a kiss” processed differently from sentences such as “giving a present”? Second, at which structural level of representation would we find the sources of this difference? Third, what is the effect of using a light verb construction such as “giving a kiss”, as opposed to “kissing”, on the event representation created in a listener? I will present data from an ERP study, an eye-tracking study, and several behavioral studies to answer these questions.
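To make the mismatch concrete, here is a minimal sketch in Python (my illustration, not material from the talk) that writes out the two argument-to-role mappings as plain data; the role labels follow the abstract, and the helper function is hypothetical:

```python
# Illustrative sketch (not from the talk): the syntax-semantics mapping
# for a canonical ditransitive versus a light verb construction.

# "Julius is giving Ellie a present": three arguments, three roles.
canonical = {
    "Julius": "Source",
    "Ellie": "Goal",
    "a present": "Theme",
}

# "Julius is giving Ellie a kiss": three syntactic arguments, but the
# event described has only two participant roles; "a kiss" names the
# event itself rather than a transferred object.
light_verb = {
    "Julius": "Agent",
    "Ellie": "Patient",
    "a kiss": None,  # no independent participant role
}

def role_count(mapping):
    """Count arguments that carry their own semantic role."""
    return sum(1 for role in mapping.values() if role is not None)

assert len(canonical) == role_count(canonical)   # 3 arguments, 3 roles
assert len(light_verb) > role_count(light_verb)  # 3 arguments, 2 roles
```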

LingLang Lunch (4/2/2014): Sheila Blumstein (Brown University)

Variability and Invariance in Speech and Lexical Processing: Evidence from Aphasia and Functional Neuroimaging

The processes underlying both speaking and understanding appear easy and seamless. And yet speech input is highly variable, the lexical form of a word shares its sound shape with many other words in the lexicon, and a given word often has multiple meanings. The goal of this research is to examine in what ways the neural system is, on the one hand, sensitive to the variability in speech and lexical processing and, on the other, able to resolve it. To this end, we will review recent research investigating how the perceptual system resolves variability in selecting the appropriate word from its competitors, in determining what category a sound belongs to (e.g., [d] or [t]), and in mapping different acoustic features of sounds (e.g., [d-t] vs. [s-z]) onto a common abstract feature such as voicing. We will then examine how higher-level information sources such as semantic and conceptual information are used in perceiving degraded speech. The implications of these findings will be considered for models of the functional and neural architecture of language.
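As a toy illustration of the categorization problem mentioned above (not a model from the talk): one acoustic cue to the voicing of a stop is voice onset time (VOT), and a rough textbook boundary of about 30 ms for English alveolar stops is assumed here purely for the sketch:

```python
# Toy sketch: mapping a continuous, variable acoustic cue (VOT) onto a
# discrete category, [d] or [t]. The 30 ms boundary is an assumed,
# approximate value for English alveolar stops, used for illustration only.

VOT_BOUNDARY_MS = 30.0  # assumed category boundary

def categorize_stop(vot_ms: float) -> str:
    """Categorize an alveolar stop as voiced /d/ or voiceless /t/ from VOT."""
    return "d" if vot_ms < VOT_BOUNDARY_MS else "t"

# The same abstract feature (voicing) is cued by different acoustics in
# other contrasts (e.g., [s] vs. [z]), which is why mapping diverse cues
# onto one feature is a nontrivial problem for the perceptual system.
for vot in (10.0, 25.0, 45.0, 70.0):
    print(f"VOT = {vot:4.1f} ms -> /{categorize_stop(vot)}/")
```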

Colloquium (4/29/2015): Gregory Hickok (University of California, Irvine)

An Integrative Approach to Understanding the Neuroscience of Language

Language serves a specialized purpose: to translate thoughts to sound (or sign) and back again. The complexity and relative uniqueness of linguistic knowledge reflect this specialization. But language evolved in the context of a brain that was already performing functions broadly important for language: perceiving, acting, remembering, learning. From an evolutionary standpoint, then, we should expect to find some architectural and computational parallels between linguistic and non-linguistic neural systems. Our work has indeed uncovered such parallels. Language processes are organized into two broad neural streams, a ventral auditory-conceptual stream and a dorsal auditory-motor stream, functionally analogous to the two streams found in vision. And the dorsal auditory-motor language stream uses computational principles found in motor control more broadly. This approach to understanding the neural basis of language does not replace traditional linguistic constructs but integrates them into a broader neuro-evolutionary context and provides a richer, comparative source of data.

Colloquium (11/11/2015): Evelina Fedorenko (Mass General Hospital/Harvard Medical School)

The Language Network and Its Place within the Broader Architecture of the Human Mind and Brain

Although many animal species have the ability to generate complex thoughts, only humans can share such thoughts with one another, via language. My research aims to understand i) the system that supports our linguistic abilities, including its neural implementation, and ii) its interfaces with the rest of the human cognitive arsenal. I will begin by introducing the “language network”, a set of interconnected brain regions that supports language comprehension and production. Focusing on the subset of this network dedicated to high-level linguistic processing, I will then consider two questions. First, what is the internal structure of the language network? In particular, do different brain regions preferentially process different levels of linguistic structure (e.g., sound structure vs. syntactic/semantic compositional structure)? And second, how does the language network interact with other large-scale networks in the human brain, such as the domain-general cognitive control network or the network that supports social cognition? To tackle these questions, I use behavioral, fMRI, and genotyping methods in healthy adults, intracranial recordings from the cortical surface of patients undergoing presurgical mapping (ECoG), and studies of patients with brain damage.

I will argue that: i) Linguistic representations are distributed across the language network, with no evidence for segregation of distinct kinds of linguistic information (i.e., phonological, lexical, and combinatorial syntactic/semantic information) in distinct regions of the network. Even aspects of language that have long been argued to rely preferentially on a specific region within the language network (e.g., syntactic processing being localized to parts of Broca’s area) turn out to be distributed across the network when measured with sufficiently sensitive tools. Further, the very same regions that are sensitive to high-level (e.g., syntactic) structure in language show sensitivity to lower-level (e.g., phonotactic) regularities. This picture is in line with much current theorizing in linguistics and with behavioral psycholinguistic data showing sensitivity to contingencies that span sound-, word-, and phrase-level structure. And ii) the language network necessarily interacts with other large-scale networks, most prominently the domain-general cognitive control system. Nevertheless, the two systems appear to be functionally distinct, given a) the differences in their functional response profiles (selective responses to language vs. responses to difficulty across a broad range of tasks) and b) their distinct patterns of functional correlations. My ongoing work aims to characterize the computations performed by these systems, and by other systems supporting high-level cognitive abilities, in order to understand the division of labor among them during language comprehension and production.
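For readers unfamiliar with the functional correlations mentioned in point b), here is a minimal sketch of the basic logic (synthetic data, not the authors' pipeline): regions of one network should track each other's time courses more closely than they track regions of another network:

```python
# Sketch with synthetic data: within-network vs. between-network
# functional correlations of regional time courses.
import numpy as np

rng = np.random.default_rng(0)
n_timepoints = 200

# Hypothetical shared fluctuations for each network.
lang_signal = rng.standard_normal(n_timepoints)  # "language" network
mdn_signal = rng.standard_normal(n_timepoints)   # "cognitive control" network

def roi_timecourse(signal, noise=1.0):
    """A region's time course: shared network signal plus regional noise."""
    return signal + noise * rng.standard_normal(n_timepoints)

lang_roi_a = roi_timecourse(lang_signal)
lang_roi_b = roi_timecourse(lang_signal)
mdn_roi = roi_timecourse(mdn_signal)

within = np.corrcoef(lang_roi_a, lang_roi_b)[0, 1]  # high
between = np.corrcoef(lang_roi_a, mdn_roi)[0, 1]    # near zero
print(f"within-network r = {within:.2f}, between-network r = {between:.2f}")
```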

LingLang Lunch (3/9/2016): Emily Myers (University of Connecticut)

Non-Native Speech Sound Learning: Studies of Sleep, Brain, and Behavior

Speech perception is subject to critical/sensitive period effects, such that acquisition of non-native (L2) speech sounds is far more difficult in adulthood than in childhood. Although adults can be trained to perceive differences among speech sounds that are not part of their native language, success is (1) variable across individuals, (2) variable across the specific sounds to be learned, and (3) variable in whether training generalizes to untrained instances. Any theory of L2 speech perception must explain these three phenomena. Accounts of the L2 speech learning process have drawn from traditions in linguistics, psychology, and neuroscience, yet a full description of the barriers to perceptual learning of L2 sounds remains elusive. New evidence from our lab suggests that training on non-native speech produces plastic effects in the brain regions involved in native-language perception, and that consolidation during sleep plays a large role in the degree to which training is maintained and generalizes to new talkers. Further, similar mechanisms may be at play when listeners learn to perceive non-standard tokens in the context of accented speech. Taken together, these findings suggest that speech perception is more plastic than critical-period accounts would predict, and that individual variability in brain structure and sleep behavior may predict some of the variability in ultimate L2 sound acquisition success.

LingLang Lunch (4/20/2016): Eiling Yee (University of Connecticut)

Putting Concepts in Context

At first glance, conceptual representations (e.g., our internal notion of the object lemon) seem static. That is, we have the impression that there is something that lemon “means” (a sour, yellow, football-shaped citrus fruit) and that this meaning does not vary. Research on semantic memory has traditionally taken this “static” perspective. In this talk I will describe studies that challenge this perspective by showing that the context an individual brings with them (via their current goals, recent experience, long-term experience, or neural degeneration) influences the cognitive and neural instantiations of object concepts. I will argue that our findings support models of semantic memory in which conceptual representations, rather than being static, are dynamic and shaped by experience.

Colloquium (12/7/2016): Gerry Altmann (University of Connecticut)

The challenges of event cognition: Object representation at the interface of episodic and semantic memory

Language is often used to describe the changes that occur around us, whether changes in state (“I cracked the glass…”) or in location (“I moved the glass onto the table…”). To fully comprehend such events requires that we represent the ‘before’ and ‘after’ states of any object that undergoes change. But how do we represent these mutually exclusive states of a single object at the same time? I shall summarize a series of fMRI studies showing that these alternative states compete with one another in much the same way as alternative interpretations of an ambiguous word might compete. This interference, or competition, manifests in a part of the brain that has been implicated in resolving competition. Moreover, activity in this area is predicted by the dissimilarity, elsewhere in the brain, between sensorimotor instantiations of the described object’s distinct states. Connectivity analyses show that the hippocampus is also implicated in these cases of language/event comprehension, as a function of whether episodic or semantic knowledge must be accessed. I shall end with the beginnings of a new account of event representation that does away with the traditional distinction between actions and participants, maintains instead that object states across time are the fundamental representational primitives of event cognition, and addresses how we instantiate individuated objects (tokens) from semantic memory (about types) on the fly. [Prior knowledge of the brain is neither presumed, required, nor advantageous!]
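As a footnote on the “dissimilarity between sensorimotor instantiations” mentioned above: one standard way to quantify dissimilarity between two activation patterns is correlation distance (1 minus Pearson r), familiar from representational similarity analysis. The sketch below uses synthetic patterns and is not the study’s actual analysis:

```python
# Minimal sketch (synthetic data): correlation distance between two
# multivoxel activation patterns standing in for an object's "before"
# and "after" states (e.g., intact vs. cracked glass).
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 500

before = rng.standard_normal(n_voxels)                # intact glass
after = before + 0.8 * rng.standard_normal(n_voxels)  # cracked glass

def correlation_distance(p, q):
    """1 - Pearson r: 0 for identical patterns, ~1 for unrelated ones."""
    return 1.0 - np.corrcoef(p, q)[0, 1]

# The hypothesis sketched in the abstract: greater state dissimilarity
# predicts more competition-related activity elsewhere in the brain.
print(f"state dissimilarity = {correlation_distance(before, after):.2f}")
```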