CAISAR: A platform for Characterizing Artificial Intelligence Safety and Robustness

Abstract

We present CAISAR, an open-source platform under active development for the characterization of AI systems' robustness and safety. CAISAR provides a unified entry point for defining verification problems by using WhyML, the mature and expressive language of the Why3 verification platform. Moreover, CAISAR orchestrates and composes state-of-the-art machine learning verification tools which, individually, are not able to efficiently handle all problems but, collectively, can cover a growing number of properties. Our aim is to assist, on the one hand, the V&V process by reducing the burden of choosing the methodology tailored to a given verification problem, and, on the other hand, tool developers by factorizing useful features (visualization, report generation, property description) into one platform. CAISAR will soon be available at https://git.frama-c.com/pub/caisar.
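To illustrate the kind of verification problem WhyML can express, here is a hypothetical sketch of a local-robustness goal. The function name `nn_apply`, the use of a scalar `real` input, and the concrete bounds are illustrative assumptions; the actual WhyML interface CAISAR exposes for networks may differ.

```whyml
(* Hypothetical sketch, not CAISAR's actual standard library:
   a local-robustness property stated as a WhyML goal. *)
theory Robustness
  use real.RealInfix
  use real.Abs

  (* Assumed to model the network under analysis; in practice such a
     symbol would be bound to a concrete model by the platform. *)
  function nn_apply (x: real) : real

  (* If the input stays within 0.01 of a reference point x0,
     the output should stay within 0.1 of the reference output. *)
  goal local_robustness:
    forall x x0: real.
      abs (x -. x0) <=. 0.01 ->
      abs (nn_apply x -. nn_apply x0) <=. 0.1
end
```

A goal of this shape would then be dispatched by the platform to whichever backend verifier is best suited to discharge it.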

Publication
CAISAR: A platform for Characterizing Artificial Intelligence Safety and Robustness

This paper will appear in the proceedings of the AISafety workshop, co-located with the IJCAI-ECAI 2022 conference.
