I am a research engineer at CEA LIST, working on software safety applied to artificial intelligence. I completed my PhD thesis in November 2021, under the supervision of Guillaume Charpiat, Zakaria Chihani and Marc Schoenauer.
My research interests include abstract interpretation, symbolic propagation, and SAT/SMT solving applied to deep neural networks. I am also interested in formulating and formalizing specifications for programs that use audio and visual inputs. I am involved in several projects with industrial partners on the topic of safe AI.
You may download my CV here.
Open positions on Trustworthy AI (interns, PhD, fixed-term contracts) at CEA LIST. Send me a message for more info!
PhD in Computer Science, 2018-2021
Université Paris-Saclay
« Diplôme d'ingénieur » (French equivalent of an MEng), 2014-2018
ENSTA Paris
« Classes préparatoires » (preparatory classes for the Grandes Écoles entrance exams), 2012-2014
Lycée Masséna, Nice
I am proficient in Python and the usual scientific libraries (NumPy, scikit-learn, PyTorch), as well as in deploying Jupyter notebooks
I am proficient in OCaml, and I have a basic understanding of the propositions-as-types correspondence
I like to teach and learn: I have had multiple teaching experiences, and I enjoy learning, be it on my research topics or something else
I can quickly put together a good meal, usually vegetarian; I have a strong preference for Mediterranean cuisine, but I also like to cook meals inspired by Cambodian and Japanese cuisines
I like to write and play tabletop RPG stories, and lose my mind watching players invent completely different ones
I delved a bit too deep into the arcana of self-hosting, and I maintain this website along with other services for myself and my relatives.
We present CAISAR, an open-source platform under active development for the characterization of AI systems' robustness and safety. CAISAR provides a unified entry point for defining verification problems by using WhyML, the mature and expressive language of the Why3 verification platform. Moreover, CAISAR orchestrates and composes state-of-the-art machine learning verification tools which, individually, are not able to efficiently handle all problems but, collectively, can cover a growing number of properties. Our aim is to assist, on the one hand, the V&V process, by reducing the burden of choosing the methodology tailored to a given verification problem, and, on the other hand, tool developers, by factorizing useful features (visualization, report generation, property description) in one platform. CAISAR will soon be available at https://git.frama-c.com/pub/caisar.
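For readers curious what such orchestration might look like in code, here is a minimal Python sketch of the general idea: dispatch one verification problem to several solver backends and keep the first conclusive verdict. All names and backends below are hypothetical illustrations, not CAISAR's actual API.

```python
# Minimal sketch of the orchestration idea: try several backend
# verifiers on one problem and keep the first conclusive verdict.
# Everything here is hypothetical, not CAISAR's actual API.
from typing import Callable, Optional

Verdict = Optional[bool]  # True: property holds, False: falsified, None: unknown

def orchestrate(problem: str, backends: list[Callable[[str], Verdict]]) -> Verdict:
    """Run each backend in turn until one returns a definite verdict."""
    for backend in backends:
        verdict = backend(problem)
        if verdict is not None:
            return verdict
    return None  # no backend could decide the property

# Two toy backends: one that always gives up, one that always decides.
def incomplete_solver(problem: str) -> Verdict:
    return None

def complete_solver(problem: str) -> Verdict:
    return True

print(orchestrate("robustness of a net around a sample", [incomplete_solver, complete_solver]))
```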
The topic of provable deep neural network robustness has raised considerable interest in recent years. Most research has focused on adversarial robustness, which studies the robustness of perceptive models in the neighbourhood of particular samples. However, other works have proved global properties of smaller neural networks. Yet, formally verifying perception remains uncharted. This is notably due to the lack of relevant properties to verify, as the distribution of possible inputs cannot be formally specified. We propose to take advantage of the simulators often used either to train machine learning models or to check them with statistical tests, a growing trend in industry. Our formulation allows us to formally express and verify safety properties on perception units, covering all cases that could ever be generated by the simulator, unlike statistical tests, which cover only the examples actually seen. Along with this theoretical formulation, we provide a tool to translate deep learning models into standard logical formulae. As a proof of concept, we train a toy example mimicking an autonomous car's perceptive unit, and we formally verify that it will never fail to capture the relevant information in the provided inputs.
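To give a flavor of what translating a network into logical formulae can look like, here is a minimal sketch using the z3 SMT solver's Python bindings: a one-input, two-neuron ReLU network is encoded as constraints, and a simple output-bound property is checked by asserting its negation. The weights, bounds and property are made up for illustration; this is not the tool from the paper.

```python
# Minimal sketch: encoding a tiny ReLU network as logical formulae
# with z3, then checking a safety property. Weights, bounds and the
# property are illustrative only, not taken from the paper.
from z3 import Real, Solver, If, And, sat

x = Real("x")                      # network input
h1, h2 = Real("h1"), Real("h2")    # hidden neurons after ReLU
y = Real("y")                      # network output

s = Solver()
# Input domain: x in [0, 1]
s.add(And(x >= 0, x <= 1))
# Hidden layer: h_i = ReLU(w_i * x + b_i), encoded with If
s.add(h1 == If(2 * x - 1 >= 0, 2 * x - 1, 0))
s.add(h2 == If(-3 * x + 2 >= 0, -3 * x + 2, 0))
# Output layer: y = h1 + 0.5 * h2
s.add(y == h1 + 0.5 * h2)
# Safety property: y never exceeds 2. We assert its negation and
# check satisfiability: "unsat" means the property holds over the
# whole input domain.
s.add(y > 2)
print("counterexample found" if s.check() == sat else "property holds")
```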