Probabilistic Semantics and Inference Under Uncertainty in Natural Language

LaCo Introductory, week 3, each day

Jean-Philippe Bernardy (University of Gothenburg)
Aleksandre Maskharashvili (Ohio State University)


Abstract: An important aspect of human reasoning is processing underspecified information expressed in natural language (here, "underspecified" means that not enough information is available to make categorical judgments, such as those in logic). Even so, we humans are able to draw conclusions from underspecified information. While the study of probabilistic inference in natural language has a long tradition in logic, linguistics, and philosophy, a coherent computational approach to it remains to be developed. In these lectures, we offer a theory of inference under uncertainty (i.e., under underspecified information) together with its computational implementation. The framework is based on a Bayesian probabilistic semantics: an inference under uncertainty is computed as a probabilistic inference. In particular, the conclusion is evaluated as the probability that it holds under the constraints imposed by the premises. The theory finds concrete illustration in a system built on the probabilistic programming paradigm.
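To make the central idea concrete, the following is a minimal sketch of evaluating a conclusion as the probability that it holds given the premises, using rejection sampling in Python. The toy scenario, all names, and the chosen priors are purely illustrative assumptions, not the system presented in the lectures:

```python
import random

def estimate(conclusion, premises, prior_sample, n=100_000):
    """Estimate P(conclusion | premises) by rejection sampling:
    draw possible worlds from a prior, keep those in which every
    premise holds, and report the fraction in which the conclusion
    also holds."""
    kept = hits = 0
    for _ in range(n):
        world = prior_sample()
        if all(p(world) for p in premises):
            kept += 1
            if conclusion(world):
                hits += 1
    return hits / kept if kept else float("nan")

# Hypothetical toy example: the vague premise "John is tall" is
# modelled as a constraint on a real-valued height drawn from a
# prior; the conclusion asks whether John is taller than 185 cm.
random.seed(0)
prior = lambda: {"height": random.gauss(175, 10)}  # heights in cm
premises = [lambda w: w["height"] > 178]           # "John is tall"
conclusion = lambda w: w["height"] > 185           # "John is over 185 cm"

p = estimate(conclusion, premises, prior, n=200_000)
print(f"P(conclusion | premises) ~ {p:.2f}")
```

The categorical judgment is unavailable here (the premise does not entail the conclusion), yet the model still returns a graded answer: the posterior probability of the conclusion among the worlds compatible with the premise.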