Neuro Causal and Symbolic AI

Workshop @ the 36th
Neural Information Processing Systems (NeurIPS) Conference

December 9th, 2022 - Virtual


“Join in the effort to discover and discuss the next generation of learning systems
capable of reasoning causally about the world.”

NeurIPS Virtual Site for nCSI Workshop (Click Here)

Understanding causal interactions is central to human cognition and thereby a central quest in science, engineering, business, and law. Developmental psychology has shown that children explore the world in a similar way to how scientists do, asking questions such as “What if?” and “Why?” AI research aims to replicate these capabilities in machines. Deep learning in particular has brought about powerful tools for function approximation by means of end-to-end trainable deep neural networks. This capability has been corroborated by tremendous success in countless applications. However, the lack of interpretability and reasoning capabilities in these models proves to be a hindrance to building systems of human-like ability. Therefore, enabling causal reasoning capabilities in deep learning is of critical importance for research on the path towards human-level intelligence. First steps towards neural-causal models exist and promise a vision of AI systems that perform causal inferences as efficiently as modern-day neural models. Similarly, classical symbolic methods are being revisited and reintegrated into current systems to allow for reasoning capabilities beyond pure pattern recognition. The Pearlian formalization of causality has revealed a theoretically sound and practically strict hierarchy of reasoning that serves as a helpful benchmark for evaluating the reasoning capabilities of neuro-symbolic systems.

Our aim is to bring together researchers interested in integrating research areas in artificial intelligence (general machine and deep learning, symbolic and object-centric methods, and logic) with rigorous formalizations of causality, with the goal of developing next-generation AI systems.

The main sponsor of the workshop is the European Laboratory for Learning and Intelligent Systems (ELLIS) network. Within ELLIS, the workshop is recognized as part of the programme on "Semantic, Symbolic and Interpretable Machine Learning".

Organizers

Area Chairs (Meta-Reviewers)

David Poole, Niklas Pfister, Sriraam Natarajan, Luís Lamb, Artur Garcez, Kristian Kersting, Vaishak Belle

Program Committee (Reviewers)

Selected Top Reviewers (*)
Nandini Ramanan, Sarah Wu, Harsha Kokel, Alexander Hägele, Hikaru Shindo, Patrick Schramowski, Phillip Lippe, Julius von Kügelgen, Siwen Yan, Davide Talon, Angshuk Dutta, Jonas Seng, Toon Vanderschueren, Adele Ribeiro, Johann Gaebler, Moritz Willig (*), Nick Pawlowski, Karl Stelzner, Sebastian Steindl, Kevin Xia, Aleksandra Nowak, Viktor Pfanschilling, Arseny Skryagin, Wolfgang Stammer, Pim De Haan (*), Leonard Henckel, Hamed Nilforoshan, Athresh Karanam, Vaibhav Agrawal, Florian Busch (*), Quentin Delfosse, Umesh Gupta