Evening Lectures

Cognitive biases in learning and generalization shape language
Jennifer Culbertson (University of Edinburgh)
28 July

Abstract: One of the most controversial hypotheses in linguistics concerns the role of individual-level cognition in driving similarities across languages. Many proposed "universal" features of language, and the mechanisms claimed to underlie them, are supported by less-than-robust empirical evidence. In this talk, I present a series of studies aimed at providing new sources of evidence for classic language universals related to word order. The first targets word order harmony: the tendency for syntactic heads and dependents to align across phrases within a language. I show that harmony is favoured both in learning and generalization, suggesting that a domain-general bias for simplicity drives this well-studied universal. I then turn to a more complex pattern of ordering present in noun phrases, which has been proposed to derive from constraints on syntactic representations. Experimental and corpus-based evidence suggest an alternative explanation of this pattern, again driven by learning and generalization, but crucially dependent on meaning and conceptual structure.

Bio: Jennifer Culbertson is a Reader and Director of the Centre for Language Evolution at the University of Edinburgh. Her research focuses on understanding how languages are shaped by learning and use. She is interested in how typological universals (differences in the frequency of linguistic patterns across the world's languages) arise from properties of our cognitive system. To get at this, she teaches people (children and adults) miniature artificial languages, and creates computational models of their behaviour.


Categorial Grammar + Distributional Semantics = Quantum NLP
Stephen Clark (Cambridge Quantum Computing)
R.T. Oehrle Memorial Lecture
4 August

Abstract: The talk will be in three parts. First, I will describe some recent work applying neural techniques to Combinatory Categorial Grammar parsing, showing how a fine-tuned BERT model can be used to produce substantial improvements over the previous generation of CCG parsers and supertaggers. These results can also be seen as a reflection of the recent improvements in NLP more generally. Second, I will describe a research program which aims to marry the type-driven aspects of categorial grammar with the distributional aspects of neural networks, resulting in tensor networks for sentences with the same shape as the underlying syntax. Finally, I will describe recent work from Cambridge Quantum, demonstrating how these tensor networks can be converted into quantum circuits and run on a quantum computer, opening the way for a future discipline of quantum NLP.
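The idea of tensor networks shaped by syntax can be illustrated with a toy distributional-compositional example in the spirit of this research program: nouns are vectors in a noun space, a transitive verb is an order-3 tensor expecting a subject and an object, and the sentence meaning is obtained by contracting the verb tensor with its arguments, so the contraction pattern mirrors the parse. The dimensions, random vectors, and word choices below are purely illustrative assumptions, not material from the talk.

import numpy as np

# Toy dimensions (illustrative assumptions)
N_DIM = 4   # noun space
S_DIM = 2   # sentence space

rng = np.random.default_rng(0)

# Noun meanings: vectors in the noun space
alice = rng.normal(size=N_DIM)
code  = rng.normal(size=N_DIM)

# A transitive verb as an order-3 tensor in N (x) S (x) N,
# mirroring its categorial type (it consumes a subject and an object).
writes = rng.normal(size=(N_DIM, S_DIM, N_DIM))

# Sentence meaning = contraction of the verb tensor with its arguments;
# the wiring of the contraction has the same shape as the parse:
#   subject -- verb -- object  ->  a vector in the sentence space.
sentence = np.einsum('i,isj,j->s', alice, writes, code)

print(sentence)   # vector in the 2-dimensional sentence space

In the quantum setting described in the talk, tensor networks of this kind are what get compiled into quantum circuits; the sketch above only shows the classical contraction.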

Bio: Stephen Clark is Head of AI at Cambridge Quantum Computing, and an Honorary Professor at Queen Mary University of London. Previously he served on the Faculties of the Universities of Oxford and Cambridge, and before joining CQC was a Senior Staff Research Scientist at DeepMind in London. He holds a PhD in Computer Science and Artificial Intelligence from the University of Sussex, and a BA in Philosophy from the University of Cambridge (Gonville and Caius College). Much of his research has been concerned with the syntactic and semantic analysis of natural language, which he currently investigates in the context of quantum computing.


How does the brain beget the mind?
Christos Papadimitriou (Columbia University)
11 August

Abstract: How does the brain beget the mind? How do molecules, cells and synapses effect reasoning, intelligence, planning, language? Despite dazzling progress in experimental neuroscience, as well as in cognitive science at the other extreme of the scale, we do not seem to be making progress on the overarching question: the gap is huge and a completely new approach seems to be required. As Richard Axel recently put it: "We don't have a logic for the transformation of neural activity into thought [...]."

What kind of formal system would qualify as this "logic"?

I will introduce the Assembly Calculus (AC), a computational system whose basic data structure is the assembly -- a large population of neurons representing a concept, word, thought, etc. -- and whose semantics and implementation entail a dynamical system of randomly interconnected spiking neurons with plasticity and inhibition. The AC appears to bridge the gap between neurons and cognition. I will also discuss recent progress on implementing language in the brain through the AC.
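As a very rough illustration of the kind of dynamics the AC formalizes, the sketch below simulates one operation, projecting an assembly from one brain area into another, over random connectivity with a winner-take-all cap (inhibition) and a simple Hebbian update (plasticity). All sizes, probabilities, and the exact update rule here are illustrative assumptions, not values or definitions from the talk.

import numpy as np

# Illustrative parameters (assumptions, not from the talk or the AC papers)
n     = 1000   # neurons per area
k     = 50     # assembly size / winner-take-all cap
p     = 0.05   # random connection probability
beta  = 0.10   # Hebbian plasticity increment
steps = 10     # rounds of firing

rng = np.random.default_rng(0)

# Random connectivity: feedforward from area A to area B, and recurrent within B
W_ab = (rng.random((n, n)) < p).astype(float)
W_bb = (rng.random((n, n)) < p).astype(float)

# A fixed assembly of k neurons firing in area A
a_fired = np.zeros(n)
a_fired[rng.choice(n, k, replace=False)] = 1.0

b_fired = np.zeros(n)
for _ in range(steps):
    # Each neuron in B sums its synaptic input from A's assembly
    # and from B's currently firing neurons.
    inputs = W_ab.T @ a_fired + W_bb.T @ b_fired

    # Inhibition modelled as a k-cap: only the k highest-input
    # neurons in B fire this round.
    winners = np.argsort(inputs)[-k:]
    new_b = np.zeros(n)
    new_b[winners] = 1.0

    # Hebbian plasticity: strengthen synapses from neurons that just
    # fired onto the new winners.
    W_ab[:, winners] += beta * W_ab[:, winners] * a_fired[:, None]
    W_bb[:, winners] += beta * W_bb[:, winners] * b_fired[:, None]

    b_fired = new_b

# After a few rounds the winning set in B stabilizes: the projected assembly.
print(int(b_fired.sum()), "neurons form the projected assembly in area B")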

Bio: Christos Harilaos Papadimitriou is the Donovan Family Professor of Computer Science at Columbia University. Before joining Columbia in 2017, he was a professor at UC Berkeley for the previous 22 years, and before that he had taught at Harvard, MIT, NTU Athens, Stanford, and UCSD. He has written five textbooks and many articles on algorithms and complexity, and their applications to optimization, databases, control, AI, robotics, economics and game theory, the Internet, evolution, and the brain. He holds a PhD from Princeton (1976), and eight honorary doctorates, including from ETH, the University of Athens, EPFL, and Univ. de Paris Dauphine. He is a member of the National Academy of Sciences of the US, the American Academy of Arts and Sciences, and the National Academy of Engineering, and he has received the Knuth Prize, the Gödel Prize, the von Neumann Medal, as well as the 2018 Harvey Prize from the Technion. In 2015 the president of the Hellenic Republic named him Commander of the Order of the Phoenix. He has also written three novels: "Turing", "Logicomix", and his latest, "Independence".