Tag Archives: Natural language processing

Colloquium (10/17/2012): Eugene Charniak (Brown University)

Bayes’ Law as Psychologically Real

Since the brain manipulates probabilities (I will argue), it should do so according to Bayes’ Law. After all, Bayes’ Law is normative, and Darwin would not expect us to do anything less. Furthermore, there is a lot to be learned from taking Bayes seriously. I consider myself a nativist despite my statistical bent, and Bayes tells me how to combine an informative prior with the evidence of our senses: compute the likelihood of the evidence. It then tells us that this likelihood must come from a very broad generative model of everything we encounter. Lastly, since Bayes’ Law says nothing about how to carry out any of this, I presume that the computational methods themselves are not learned but innate, and I will argue that there seem to be very few options for how this can be done, with something like particle filtering being one of the few. I will illustrate these ideas with work in computational linguistics, both my own and that of others.
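The Bayesian recipe the abstract alludes to — combine an informative prior with the likelihood of the evidence — is exactly what a particle filter approximates. Below is a minimal illustrative sketch (not code from the talk; the state-space model, noise levels, and observation sequence are all invented for the example): particles encode the prior, a generative model propagates them, likelihood weights score the evidence, and resampling yields the posterior.

```python
import math
import random

random.seed(0)

def particle_filter(observations, n_particles=1000, obs_noise=1.0, proc_noise=0.5):
    """Bootstrap particle filter: approximate Bayesian updating.

    The particle cloud represents the prior (here centered on 0);
    each step propagates particles through a generative model,
    weights them by the likelihood of the observation, and resamples.
    """
    particles = [random.gauss(0.0, 1.0) for _ in range(n_particles)]
    for y in observations:
        # Propagate each particle through the generative (process) model.
        particles = [x + random.gauss(0.0, proc_noise) for x in particles]
        # Weight by the Gaussian likelihood of the observation.
        weights = [math.exp(-((y - x) ** 2) / (2 * obs_noise ** 2))
                   for x in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        # Resample: Bayes' combination of prior particles and likelihood.
        particles = random.choices(particles, weights=weights, k=n_particles)
    return sum(particles) / n_particles  # posterior mean estimate

# Observations hover around 5, far from the prior mean of 0;
# after a few updates the posterior mean is pulled toward them.
estimate = particle_filter([4.8, 5.1, 4.9, 5.2, 5.0])
```

The appeal of this scheme as a candidate innate mechanism is its simplicity: it needs only sampling, weighting, and resampling, with no explicit representation of the posterior distribution.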

LingLang Lunch (12/10/2014): Stefanie Tellex (Brown University)

Natural Language and Robotics

Natural language can be a powerful, flexible way for people to interact with robots. A particular challenge for designers of embodied robots, in contrast to disembodied methods such as phone-based information systems, is that natural language understanding systems must map between linguistic elements and aspects of the external world, thereby solving the so-called symbol grounding problem. This talk describes a probabilistic framework for robust interpretation of grounded natural language, called Generalized Grounding Graphs (G^3). The G^3 framework leverages the structure of language to define a probabilistic graphical model that maps between elements in the language and aspects of the external world. It can compose learned word meanings to understand novel commands that may never have been seen during training. Taking a probabilistic approach enables the robot to employ information-theoretic dialog strategies, asking targeted questions to reduce uncertainty about different parts of a natural language command. By inverting the model, the robot can generate targeted natural language requests for help from a human partner. This approach points the way toward more general models of grounded language understanding, which will lead to robots capable of building world models from both linguistic and non-linguistic input, following complex grounded natural language commands, and engaging in fluid, flexible dialog with their human partners.
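The information-theoretic dialog strategy described above can be illustrated with a small sketch (a hypothetical toy example, not the G^3 implementation): given a distribution over candidate groundings of a phrase, the robot asks the question whose answer maximizes expected information gain, i.e. maximally reduces the entropy of that distribution. The candidate objects, attributes, and probabilities below are all invented for the illustration.

```python
import math
from collections import defaultdict

def entropy(probs):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical posterior over candidate groundings of "the box",
# keyed by (color, size) attributes the robot could ask about.
candidates = {
    ("red", "large"): 0.4,
    ("red", "small"): 0.3,
    ("blue", "large"): 0.2,
    ("blue", "small"): 0.1,
}

def expected_entropy_after(attr_index):
    """Expected posterior entropy after asking about one attribute
    (0 = color, 1 = size), assuming a truthful answer."""
    by_answer = defaultdict(float)
    for attrs, p in candidates.items():
        by_answer[attrs[attr_index]] += p
    exp_h = 0.0
    for answer, p_answer in by_answer.items():
        # Condition on the answer and renormalize.
        posterior = [p / p_answer for attrs, p in candidates.items()
                     if attrs[attr_index] == answer]
        exp_h += p_answer * entropy(posterior)
    return exp_h

prior_h = entropy(candidates.values())
gains = {q: prior_h - expected_entropy_after(i)
         for i, q in enumerate(["color", "size"])}
best_question = max(gains, key=gains.get)
```

In this toy distribution, asking about size splits the candidates more evenly than asking about color, so it yields the larger expected reduction in uncertainty; a robot following this strategy would ask about size first.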