Are Large Language Models Simply Causal Parrots?

February 27th, 2024 (In-Person)


Food for Thought on LLMs and Causal Reasoning by

Judea Pearl is a Professor of Computer Science at the University of California, Los Angeles (UCLA) and winner of the 2011 A. M. Turing Award for his “fundamental contributions to artificial intelligence.” Judea pioneered a counterfactual notion of causality whose formalism is the de facto standard for causal AI/ML research. Judea's technical books on probabilistic and causal reasoning have been cited well over 40 thousand times, while his recent "The Book of Why" has popularized ideas from the Pearlian notion of causality in layman's terms for a global audience across multiple languages. In a recent interview with the American Statistical Association, Judea discussed new ideas and thoughts on causal inference, its promises and limitations, in light of recent advances with Large Language Models.

Invited Speakers on Research in LLMs x Reasoning:

Emre Kıcıman is a Senior Principal Researcher at Microsoft Research. Previously, Emre completed a PhD in Computer Science at Stanford University. Emre's research interests span causal inference, machine learning, and AI’s implications for people and society. His recent work examines Large Language Models and their causal inference capabilities.
Andrew Lampinen is a Senior Research Scientist at DeepMind. Previously, Andrew completed a PhD in Cognitive Psychology at Stanford University, building on an earlier background in mathematics, physics, and machine learning. Andrew's research concerns cognitive flexibility and generalization, and how these abilities are enabled by factors like language, memory, and embodiment. His recent work examines Large Language Models and their cognitive capabilities.
Guy Van den Broeck is a Professor and Samueli Fellow in the Computer Science Department at the University of California, Los Angeles (UCLA). Previously, Guy completed a PhD in Computer Science at KU Leuven. Guy's research focuses on developing models that can address structured problems and deal with uncertainty in machine learning. To this end, Guy studies statistical relational learning methods, tractable probabilistic models, and neuro-symbolic AI. His recent work examines Large Language Models and their logical reasoning capabilities.