“Join in the effort to discover and discuss language models
capable of reasoning causally about the world.”
AAAI Site for LLM-CP Workshop
Understanding causal interactions is central to human cognition and therefore a central quest in science, engineering, business, and law. Developmental psychology has shown that children explore the world much as scientists do, asking questions such as "What if?" and "Why?". AI research aims to replicate these capabilities in machines. Deep learning in particular has produced powerful tools for function approximation by means of end-to-end trainable deep neural networks, a capability corroborated by tremendous success in countless applications. One of these successes is Large Language Models (LLMs). However, their lack of interpretability and the community's indecisiveness in assessing their reasoning capabilities remain a hindrance to building systems with human-like ability. Therefore, both enabling and better understanding causal reasoning capabilities in such LLMs is of critical importance for research on the path towards human-level intelligence. The Pearlian formalization of causality has revealed a theoretically sound and practically strict hierarchy of reasoning that serves as a helpful grounding for what LLMs can achieve.
Our aim is to bring together researchers interested in identifying to what extent the output and internal workings of LLMs can be considered causal. Ultimately, we intend to answer the question: "Are LLMs causal parrots, or can they reason causally?"
Organizers
- Matej Zečević (TU Darmstadt)
- Amit Sharma (Microsoft)
- Lianhui Qin (UCSD)
- Devendra Singh Dhami (TU/e, hessian.AI, TU Darmstadt [CAUSE])
- Aleksander Molak (CausalPython)
- Kristian Kersting (advisor, TU Darmstadt, hessian.AI, DFKI)
For inquiries or questions concerning the workshop, contact: llm-cp@googlegroups.com.