Formal Verification

Generative AI To Be Considered Harmful

This opinion blog post was originally written in French. I commissioned a trained translator, Noah Vago, to translate it into English. Any discrepancies between the French and the English versions are my responsibility. ChatGPT’s launch on November 30th, 2022 has had profound consequences. Since OpenAI released this chatbot online, there have been countless laudatory declarations stating that “AI” was supposedly able to transform the workplace, make our lives easier and solve an (often never specified) host of problems. Leading scientists such as Philippe Aghion, Professor of innovation economics at the Collège de France, hypothesise that, if we adapt our institutions (one may wonder what “adapting” means in this context), AI can be a driver of economic growth. “AI”’s importance turns it into a strategic and unavoidable issue, which may explain why an “AI” summit was organised with great ceremony last February. Emmanuel Macron invited, among others, Elon Musk, AI’s self-proclaimed big shot, who a mere two days earlier had given in to the impulse of publicly performing two Nazi salutes, and who is an outspoken supporter of the AfD, Germany’s far-right party. Meanwhile, top software companies heavily push the integration of “AI” into their programs, making its use unavoidable – as in, there is no alternative. On an anecdotal note, my optician always asks for my opinion on AI and its – actual or supposed – efficiency. Offline role-playing narrators – a hobby I enjoy – can quickly obtain plausible illustrations by using Microsoft’s generative AIs.

Formal verification of symbolic and connectionist AI: a way toward higher quality software

The pervasiveness of AI software – both symbolic and connectionist – in our societies is a fact. Its very dissemination raises important questions from the perspective of software engineering. As AI software malfunctions can have dire consequences for individuals and societies, AI software should aim for high quality. The field of formal methods has successfully transferred techniques and tools to some of the most critical industries. The goal of this course is to provide an accurate perspective on formal methods applied to AI software. Through real-world industrial examples, we will present how formal methods can help AI developers assess the quality of their software, ranging from adversarial robustness to automated neural network repair and explainable AI with guarantees.

Characterising AI trust properties with Why3

We propose a demonstration of CAISAR, free software under development within the software safety and security laboratory of CEA LIST. CAISAR is a platform for characterising the safety of software resulting from a …

IDESSAI 2021

Formal verification of deep neural networks: theory and practice; a tutorial I gave during the joint INRIA-DFKI 2021 summer school.

PFIA 2020

Theory and practice of deep learning verification; a tutorial I gave during the PFIA 2020 conference.

CAMUS: A Framework to Build Formal Specifications for Deep Perception Systems Using Simulators

The topic of provable deep neural network robustness has raised considerable interest in recent years. Most research has focused on adversarial robustness, which studies the robustness of perceptive models in the neighbourhood of particular samples. …
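The local notion of adversarial robustness mentioned above is commonly formalised as follows (a standard textbook definition, not a formula quoted from the paper): a classifier $f$ is $\varepsilon$-robust at a sample $x$ when

```latex
\forall x'.\; \lVert x' - x \rVert_\infty \le \varepsilon
\;\Longrightarrow\;
\operatorname{argmax}_i f_i(x') = \operatorname{argmax}_i f_i(x)
```

i.e. no perturbation of magnitude at most $\varepsilon$ (here in the $L_\infty$ norm; other norms are also used) can change the predicted class.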