Learning, Probabilities and Causality

Credits

6 ECTS

Track

M2 MSIAM (DS)

Instructors

Xavier Alameda-Pineda, Karim Assaad, Emilie Devijver, Eric Gaussier, Thomas Hueber

Objectives

The main aim of this course is to provide the principles and tools to understand and master learning models based on probabilities and causality.

Description

Causality is at the core of our vision of the world and of the way we reason. It has long been recognized as an important concept and was already mentioned in the ancient Hindu scriptures: “Cause is the effect concealed, effect is the cause revealed”. Democritus famously proclaimed that he would rather discover a causal relation than be the king of presumably the wealthiest empire of his time. Nowadays, causality is seen as an ideal way to explain observed phenomena and to provide tools to reason about the possible outcomes of interventions and what-if experiments, which are central to counterfactual reasoning, such as “what if this patient had been given this particular treatment?”

Course Outline

Probabilistic Learning

In this part of the course we will study various probabilistic models, assuming that the causal relationships between the random variables are given. We will focus on unsupervised probabilistic models, from classical Gaussian mixtures to more recent variational techniques, including diffusion models. A non-exhaustive list of models discussed in class follows (a short illustrative sketch is given after the list):

  • Gaussian mixture models
  • Hidden Markov models
  • Probabilistic principal component analysis
  • Linear dynamical systems (Kalman filtering)
  • Variational autoencoders and their dynamical counterparts
  • Normalising flows and diffusion models
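
As a small taste of the first item above, here is a minimal sketch that fits a two-component Gaussian mixture with scikit-learn on synthetic data; the data, the number of components and the use of sklearn.mixture.GaussianMixture are illustrative assumptions, not course material.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Synthetic 1-D data drawn from two Gaussian components (illustrative only).
    rng = np.random.default_rng(0)
    x = np.concatenate([rng.normal(-2.0, 0.5, 300),
                        rng.normal(3.0, 1.0, 700)]).reshape(-1, 1)

    # Fit a 2-component mixture by expectation-maximisation.
    gmm = GaussianMixture(n_components=2, random_state=0).fit(x)

    print("weights:", gmm.weights_)            # mixing proportions
    print("means:", gmm.means_.ravel())        # component means
    print("responsibilities of x=0:", gmm.predict_proba([[0.0]]))

The same expectation-maximisation machinery reappears, with different latent-variable structure, in hidden Markov models, probabilistic PCA and linear dynamical systems.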

Causal Learning

In this lecture, we will provide an overview of causality, from its first definitions centuries ago to its modern usage in machine learning and reasoning. In particular, we will answer the following questions:

  • How to represent causal relations through structural causal graphs?
  • How to infer causal relations from purely observational data, from purely interventional data and from a mixture of them?
  • How to exploit and reason upon causal knowledge? In particular, can one quantify the relation between a cause and its effect? Can one compute the effect of an intervention? Can one use causal knowledge for counterfactual reasoning or mediation analysis? (A short illustrative sketch is given at the end of this section.)

Theoretical and practical work

The course will be divided into lectures and practical sessions aimed at better understanding the different notions introduced. The concepts behind causality are not too difficult to grasp, but they nevertheless differ from traditional probability concepts.
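
To give a flavour of the interventional questions listed above, the sketch below simulates a small structural causal model with a confounder and contrasts the observational regression slope with the effect of an intervention (the do-operator); the structural equations and coefficients are made-up assumptions for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Illustrative structural causal model:  Z -> X,  Z -> Y,  X -> Y
    z = rng.normal(size=n)                    # confounder
    x = 2.0 * z + rng.normal(size=n)          # X := 2 Z + noise
    y = 3.0 * x + z + rng.normal(size=n)      # Y := 3 X + Z + noise

    # Observational association: regression slope of Y on X (biased by Z).
    obs_slope = np.cov(x, y)[0, 1] / np.var(x)

    # Intervention do(X = x0): the Z -> X edge is cut, so we regenerate Y
    # with X held fixed and read off the causal slope (3 by construction).
    x0, x1 = 0.0, 1.0
    y_do0 = 3.0 * x0 + z + rng.normal(size=n)
    y_do1 = 3.0 * x1 + z + rng.normal(size=n)
    do_slope = (y_do1.mean() - y_do0.mean()) / (x1 - x0)

    print(f"observational slope: {obs_slope:.2f}")    # about 3.4
    print(f"interventional slope: {do_slope:.2f}")    # about 3.0

The gap between the two slopes is exactly the difference between conditioning on X and intervening on X that the structural causal graph makes explicit.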

Prerequisites

Probability and statistics background.

Selected references

Pattern Recognition and Machine Learning, by C. Bishop, 2006.

An Introduction to Variational Autoencoders, by D. P. Kingma and M. Welling, 2019.

The Matrix Cookbook, by K. B. Petersen and M. S. Pedersen.

Dynamical Variational Autoencoders: A Comprehensive Review, by L. Girin et al., 2021.

The Book of Why: The New Science of Cause and Effect, by J. Pearl and D. Mackenzie, 2018.

Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference, by J. Pearl, 1988.

Causation, Prediction, and Search, by P. Spirtes, C. Glymour and R. Scheines, 2000.

Elements of Causal Inference: Foundations and Learning Algorithms, by J. Peters, D. Janzing and B. Schölkopf, 2017.

Causality: Models, Reasoning and Inference, by J. Pearl, 2009.