This tutorial will explore the promises and shortcomings of the field of explainable AI. Through the lens of post-hoc explanation methods and interpretable-by-design models, attendees will learn what answers to expect when working with explainable AI techniques.
Formal verification of deep neural networks: theory and practice; a tutorial I gave during the joint INRIA-DFKI 2021 summer school.
The great versatility and impressive results of modern neural networks stem in part from their non-linearity. Unfortunately, this fundamental property makes their formal verification very difficult, even if we …
Theory and practice of deep learning verification; a tutorial I gave during the PFIA 2020 conference.
The topic of provable deep neural network robustness has attracted considerable interest in recent years. Most research has focused on adversarial robustness, which studies the robustness of perception models in the neighbourhood of particular samples. …
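As a concrete point of reference (a standard formulation, not necessarily the exact one used in the tutorial), local adversarial robustness of a classifier f around a sample x is usually stated as:

\[
\forall x', \;\; \|x' - x\|_p \le \varepsilon \;\Longrightarrow\; \arg\max_i f_i(x') = \arg\max_i f_i(x),
\]

i.e. every input within an \varepsilon-ball around x must receive the same predicted class as x itself; provable robustness methods aim to certify this property rather than merely test it empirically.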
Tutorial materials from a session I gave with my advisor Guillaume Charpiat during the DigiCOSME Spring School ForMaL.
For a service robot to provide appropriate daily-life assistance, managing information about household objects in a room or house is an indispensable function. In particular, information about what objects are in the environment and where they are located is …