ECSQARU-25
StaRAI: From a Probabilistic Propositional Model to a Highly Compressed Probabilistic Relational Model at ECSQARU 2025
The 18th European Conference on Symbolic and Quantitative Approaches to Reasoning with Uncertainty
Taking place on 23 September 2025 at 9 am.
Introduction
Our surrounding world is inherently uncertain and relational. The field of Statistical Relational AI (StaRAI) has emerged to account for both uncertainty and relational modelling, for example in probabilistic graphical models. StaRAI explicitly encodes objects and relations in probabilistic models, which enables algorithms to exploit repeated structures, i.e., subgraphs with identical graph structure and matching associated probability functions, for efficiency gains during inference.
Before repeated structures can be exploited, they have to be identified. While such structures frequently occur in many practical applications, they are generally not explicitly represented in a learned model and thus cannot be exploited by inference algorithms. It is therefore crucial to identify and compactify these structures efficiently to enable efficient inference. A belief propagation scheme can be used to identify repeated structures, which can then be used to compress the model, resulting in a significant reduction in storage requirements. In addition, inference algorithms no longer run in time exponential, but merely polynomial, in the so-called domain sizes.
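One such belief propagation scheme is colour passing: variables and factors repeatedly exchange colour signatures until the colouring stabilises, and variables that end up with the same colour are interchangeable and can be grouped in the compressed model. The following is a minimal, self-contained sketch under simplifying assumptions (hashable potential tables, each variable occurring at most once per factor); all names and data here are illustrative, not the presenters' implementation.

```python
def colour_passing(variables, factors):
    """variables: list of names; factors: list of (potential_tuple, arg_list).

    Returns a colouring of the variables; equal colours indicate
    interchangeable variables (repeated structure).
    """
    def recolour(sigs):
        # Compress arbitrary signatures back into small integer colours.
        ids = {}
        return {k: ids.setdefault(s, len(ids)) for k, s in sigs.items()}

    var_col = {v: 0 for v in variables}  # all variables start identical
    # factors start coloured by their potential tables
    fac_col = recolour({i: pot for i, (pot, _) in enumerate(factors)})

    while True:
        # each variable collects (factor colour, its argument position),
        # since potentials need not be symmetric in their arguments
        var_sig = {v: (var_col[v],
                       tuple(sorted((fac_col[i], args.index(v))
                                    for i, (_, args) in enumerate(factors)
                                    if v in args)))
                   for v in variables}
        # each factor collects the colours of its arguments, in order
        fac_sig = {i: (fac_col[i], tuple(var_col[v] for v in args))
                   for i, (_, args) in enumerate(factors)}
        new_var, new_fac = recolour(var_sig), recolour(fac_sig)
        # refinement only ever splits colour classes, so comparing the
        # number of distinct colours suffices as a stability check
        if (len(set(new_var.values())) == len(set(var_col.values()))
                and len(set(new_fac.values())) == len(set(fac_col.values()))):
            return new_var
        var_col, fac_col = new_var, new_fac


# Toy epidemic example: one shared potential connects Sick to each person.
phi = (0.6, 0.4, 0.3, 0.7)  # flattened table over two Boolean variables
variables = ["Sick", "Alice", "Bob", "Eve"]
factors = [(phi, ["Sick", "Alice"]),
           (phi, ["Sick", "Bob"]),
           (phi, ["Sick", "Eve"])]
groups = colour_passing(variables, factors)
```

Here Alice, Bob, and Eve receive the same colour and can be represented by a single parameterised variable with a domain of size three, while Sick keeps its own colour; this is the kind of grouping a lifted (compressed) representation exploits.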
This tutorial provides a look at recent advances in the field of computing a highly compressed probabilistic relational model from a given probabilistic propositional model. We consider how the compactification of a probabilistic propositional model can efficiently be realised and how the resulting compressed representation is applied to speed up inference. Furthermore, we discuss the approximation of a compressed representation, give error bounds for the induced approximation error as well as investigate how to obtain a compressed model for a given error bound, and take a look at an application example in the field of causal inference.
Presenters
Target Audience, Prerequisite Knowledge, and Learning Goals
The tutorial will be mostly self-contained. While we assume familiarity with probabilistic graphical models such as Bayesian networks and factor graphs, we will revisit all necessary definitions. The tutorial is therefore of potential interest to all researchers working on reasoning under uncertainty.
The goal of this tutorial is two-fold:
- to provide an overview of recent developments in compactifying probabilistic graphical models and
- to discuss interesting outcomes and opportunities of the compactification of probabilistic graphical models.
Agenda
The tutorial starts at 9 am.
- Introduction [Marcel, 50 min, slides]
- Relational models under uncertainty
- Obtaining a compressed representation
- Compressing probabilistic relational models [Malte, 90 min; 30 min break after 40 min, slides]
- Advancing the state of the art to obtain an exact compressed representation
- Approximating a compressed representation with known error bounds
- Handling unknown factors
- Application: Lifted Causal Inference [Malte, 30 min, slides]
- Lifted computation of causal effects
- Lifted computation of causal effects with partial causal knowledge
- Summary [Marcel, 10 min, slides]
Related Publications
- Babak Ahmadi, Kristian Kersting, Martin Mladenov, and Sriraam Natarajan: Exploiting Symmetries for Scaling Loopy Belief Propagation and Relational Training. In: Machine Learning, 92:91–132, 2013.
- Tanya Braun and Ralf Möller: Parameterised Queries and Lifted Query Answering. In: IJCAI-18 Proceedings of the 27th International Joint Conference on Artificial Intelligence, 2018.
- Luc De Raedt, Angelika Kimmig, and Hannu Toivonen: ProbLog: A Probabilistic Prolog and its Application in Link Discovery. In: IJCAI-07 Proceedings of 20th International Joint Conference on Artificial Intelligence, 2007.
- Norbert Fuhr: Probabilistic Datalog - A Logic for Powerful Retrieval Methods. In: SIGIR-95 Proceedings of the 18th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 1995.
- Marcel Gehrke, Ralf Möller, Tanya Braun: Taming Reasoning in Temporal Probabilistic Relational Models. In ECAI-20 Proceedings of the 24th European Conference on Artificial Intelligence, 2020.
- Manfred Jaeger: Relational Bayesian Networks. In: UAI-97 Proceedings of the 13th Conference on Uncertainty in Artificial Intelligence, 1997.
- Malte Luttermann, Tanya Braun, Ralf Möller, and Marcel Gehrke: Colour Passing Revisited: Lifted Model Construction with Commutative Factors. In: AAAI-24 Proceedings of the 38th AAAI Conference on Artificial Intelligence, 2024.
- David Poole: First-order Probabilistic Inference. In: IJCAI-03 Proceedings of the 18th International Joint Conference on Artificial Intelligence, 2003.
- Matthew Richardson and Pedro Domingos: Markov Logic Networks. In: Machine Learning, 62(1-2), 2006.
- Taisuke Sato: A Statistical Learning Method for Logic Programs with Distribution Semantics. In: Proceedings of the 12th International Conference on Logic Programming, 1995.
- Nima Taghipour, Daan Fierens, Jesse Davis, and Hendrik Blockeel: Lifted Variable Elimination: Decoupling the Operators from the Constraint Language. In: Journal of Artificial Intelligence Research, 47(1), 2013.
- Guy Van den Broeck, Nima Taghipour, Wannes Meert, Jesse Davis, and Luc De Raedt: Lifted Probabilistic Inference by First-order Knowledge Compilation. In: IJCAI-11 Proceedings of the 22nd International Joint Conference on Artificial Intelligence, 2011.