I’ll present a medley of case studies on this question, which will hopefully make for some interesting discussion. I’ll begin with a computational study on the syntax of five languages: do the grammars of these languages order information in a way that makes the language easier to process than expected by chance (Gildea & Jaeger, 2015)? I then present work on miniature artificial language learning to show that the biases we observe in the first study operate during language acquisition, and that they are strong enough to push learners to deviate from the input language towards languages that are easier to process and encode information more efficiently (Fedzechkina, Jaeger, & Newport, 2012; Fedzechkina, Newport, & Jaeger, 2016; Fedzechkina & Jaeger, under review). Time permitting, I’ll also show how related biases might cause change within a speaker’s production over that speaker’s lifetime (suggesting a second path through which language processing can affect language change; Buz, Tanenhaus, & Jaeger, 2016). Alternatively, I can show how adaptive processes during language understanding continuously reshape our linguistic representations throughout our lives (Fine, Jaeger, Farmer, & Qian, 2013; Kleinschmidt & Jaeger, 2015), including the acquisition of new (e.g., dialectal) syntax (Fraundorf & Jaeger, under review). Come prepared to vote (and to be outvoted).
Starting at the level of long-term language change, we find that the number of minimal lexical pairs that a phoneme contrast distinguishes strongly predicts whether a change to that phoneme contrast preserves or eliminates lexical distinctions. Specifically, phoneme contrasts that distinguish few minimal pairs are more likely to merge (a change that eliminates lexical distinctions), while those that distinguish many minimal pairs are more likely to participate in chain-shifts or phoneme splits (changes that preserve lexical distinctions).
In one proposed mechanism for this effect, hyperarticulation of the phonetic cues that distinguish words creates within-category, ‘cryptic’ variation in phoneme categories, which in turn shapes future patterns of sound change. At the level of usage, this model predicts that we should find hyperarticulation of phonetic cues that provide more information distinguishing their host word from a competitor. In support of this prediction, I show evidence that, in a corpus of natural English speech, two distinct types of phonetic cues, voice onset time and vowel-to-vowel Euclidean distance, are hyperarticulated when they distinguish their host word from a minimal-pair competitor (e.g., pat ~ bat). Taken together, these results provide strong converging evidence that hyperarticulation of phonetic cues to lexical meaning in usage indirectly promotes the maintenance of a communicatively efficient system of phoneme contrasts over time.
Convergence varies across individuals, but each individual exhibits some consistency in convergence across her conversations within a given measure, both when interacting with different partners and when undertaking different tasks with the same partner. This consistency was present in phonological measures (vowel formants) and prosodic measures (intensity, pitch, phonation), but was not significant for turn-taking and speech-rate patterns.
Convergence also varies across measures: there was no significant correlation between convergence in different measures, so the patterns a speaker exhibits in one measure do not predict her patterns in other measures.
These results indicate that convergence results in one measure will not necessarily be representative of what would be found in other measures, which has implications both for designing convergence research and for interpreting its results. Moreover, they suggest that the processes underlying convergence in different characteristics are not equivalent, but may be mediated by individual differences in attention or in other aspects of phonological processing or storage.