Artificial Intelligence

Generative AI To Be Considered Harmful

This opinion blog was originally written in French. I commissioned a trained translator, Noah Vago, to translate it into English. Any discrepancies between the French and the English versions are my responsibility.

ChatGPT’s launch on November 30th, 2022 has had profound consequences. Since OpenAI released this chatbot online, there have been countless laudatory declarations stating that “AI” was supposedly able to transform the workplace, make our lives easier and solve a (often never specified) host of problems. Leading scientists such as Philippe Aghion, Professor of innovation economics at the Collège de France, hypothesise that, if we adapt our institutions (one may wonder what “adapting” means in this context), AI can be a driver of economic growth. “AI”’s importance turns it into a strategic and unavoidable issue, which could explain why an “AI” summit was organised with great ceremony last February. Emmanuel Macron invited, among others, Elon Musk, AI’s self-proclaimed big shot, who a mere two days earlier had given in to the impulse of publicly performing two Nazi salutes, and who is an outspoken supporter of the AfD, Germany’s far-right party. Top software companies heavily push the integration of “AI” into their programs, making its use unavoidable – as in, there is no alternative. On a more anecdotal note, my optician always asks for my opinion on AI and its – actual or supposed – efficiency. Narrators of tabletop role-playing games – a hobby I enjoy – can quickly obtain plausible illustrations by using Microsoft’s generative AIs.

Formal verification of symbolic and connectionist AI: a way toward higher quality software

The pervasiveness of AI software – both symbolic and connectionist – in our societies is a fact. Its very dissemination raises important questions from the perspective of software engineering. As AI software malfunctions can have dire consequences on individuals and societies, AI software is expected to aim for high quality. The field of formal methods has successfully transferred techniques and tools into some of the most critical industries. The goal of this course is to provide an accurate perspective on formal methods applied to AI software. Drawing on real-world industrial examples, we will present how formal methods can help AI developers assess the quality of their software, ranging from adversarial robustness to automated neural network repair and explainable AI with guarantees.
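To make the adversarial-robustness property mentioned above concrete, here is a minimal, self-contained sketch of one classic formal technique: interval bound propagation through a small ReLU network, which soundly (but incompletely) certifies that no perturbation within an L-infinity ball can change the predicted class. The two-layer network, weights, and radius below are made up purely for illustration; this is not CAISAR's API.

```python
import numpy as np

def forward(x, layers):
    """Plain forward pass through a ReLU network given as [(W, b), ...]."""
    for W, b in layers[:-1]:
        x = np.maximum(W @ x + b, 0.0)
    W, b = layers[-1]
    return W @ x + b

def ibp_layer(lo, hi, W, b):
    """Propagate an interval [lo, hi] soundly through the affine map W @ x + b."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def certify(x, eps, layers):
    """Sound but incomplete check: True means the predicted class provably
    cannot change inside the L-infinity ball of radius eps around x."""
    lo, hi = x - eps, x + eps
    for W, b in layers[:-1]:
        lo, hi = ibp_layer(lo, hi, W, b)
        lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)  # ReLU is monotone
    lo, hi = ibp_layer(lo, hi, *layers[-1])
    c = int(np.argmax(forward(x, layers)))
    return bool(lo[c] > np.delete(hi, c).max())

# Hypothetical 2-layer, 2-class network and input, for illustration only.
layers = [(np.eye(2), np.zeros(2)),
          (np.array([[2.0, 0.0], [0.0, 1.0]]), np.zeros(2))]
x = np.array([1.0, 0.1])
print(certify(x, 0.1, layers))  # True: small perturbations are certified
print(certify(x, 1.0, layers))  # False: the certificate fails for a larger ball
```

Note that a `False` answer here does not prove an adversarial example exists – the interval bounds are over-approximations, which is precisely the soundness/completeness trade-off such verification tools navigate.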

Characterizing AI trust properties with Why3

We propose a demonstration of CAISAR, an open-source software under development at the software safety and security laboratory of CEA LIST. CAISAR is a platform for characterizing the safety of software resulting from a …

CAISAR: A platform for Characterizing Artificial Intelligence Safety and Robustness

We present CAISAR, an open-source platform under active development for the characterization of AI systems' robustness and safety. CAISAR provides a unified entry point for defining verification problems by using WhyML, the mature and expressive …

Frailties of AI - Paris Saclay AI Master

This is the material for a course I gave in the Université Paris-Saclay AI master's programme on Fairness in Artificial Intelligence. Although the course was held online, the interactions between the students, the professors and myself were really interesting and sparked new questions, especially on dataset quality assessment.

Theory and Practice of Formal Verification of Machine Learning

Since 2020, I have regularly taught formal verification of machine learning programs to students of the SETI master's programme.