Our recent PhD graduate Jie Ren just published a paper in Cognition with Uriel Cohen Priva and Jim Morgan. They argue that the evidence that the lexicon is underspecified is not robust enough to show up in the task types they used: Speakers were as willing to accept /t/ for /k/ as /k/ for /t/.
Uriel Cohen Priva and Chelsea Sanker have just had their paper published in LabPhon. They show that using difference-in-difference to measure convergence, though convenient and frequently used, should ultimately be avoided in most situations: Speakers whose performance is close to the mean of the distribution or to their interlocutors' is likely to appear divergent, and speakers whose performance is far from the mean is likely to appear convergent. Both effects can lead to spurious evidence for individual differences in convergence.
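The bias they describe is essentially regression to the mean. A minimal simulation can illustrate it (all numbers here are hypothetical, not from the paper): each simulated speaker has a fixed phonetic target and produces two noisy measurements, with no true convergence toward the interlocutor at all. Difference-in-difference (baseline distance from the interlocutor minus post-exposure distance) nevertheless looks systematically negative for speakers who started near the interlocutor and positive for those who started far away.

```python
import random
import statistics

random.seed(0)

interlocutor = 0.0          # interlocutor's value on some phonetic dimension
n_speakers = 5000
did_near, did_far = [], []  # DID scores, split by baseline distance

for _ in range(n_speakers):
    true_val = random.gauss(0, 1)               # speaker's stable target: no real convergence
    baseline = true_val + random.gauss(0, 0.5)  # noisy pre-exposure measurement
    post = true_val + random.gauss(0, 0.5)      # noisy post-exposure measurement
    # difference-in-difference: positive = "converged", negative = "diverged"
    did = abs(baseline - interlocutor) - abs(post - interlocutor)
    if abs(baseline - interlocutor) < 0.5:
        did_near.append(did)   # speakers who started close to the interlocutor
    else:
        did_far.append(did)    # speakers who started far away

print(statistics.mean(did_near))  # negative: near speakers look divergent
print(statistics.mean(did_far))   # positive: far speakers look convergent
```

Since no speaker actually shifts, any split of the sample that correlates with baseline distance will show apparent individual differences in convergence, which is exactly the failure mode the paper warns against.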