Machine learning techniques, and neural networks in particular, are undergoing an impressive expansion, permeating ever more domains and becoming the next frontier for human societies. Aircraft collision avoidance, cancer detection, justice advisors, autonomous vehicles, and mooring line failure detection are but a few applications of neural networks. This effervescence, however, may bring more than benefits, as it slowly but surely reaches critical systems. Indeed, the remarkable efficiency of neural networks comes at a price, increasingly underlined by the scientific consensus: vulnerability to environmental or adversarial perturbations, unpredictability, and so on, which prevent their full-scale integration into critical systems. While the domain of critical software enjoys a plethora of verification and validation methods (abstract interpretation, model checking, simulation, bounded testing…), these methods are generally of little help when applied to neural networks.

This thesis aims at bridging formal software verification and machine learning, in order to bring trust to critical systems that incorporate neural network components. We first study the exact causes that prevent a straightforward application of existing verification techniques to neural networks. We argue that these issues are threefold: the lack of formal specifications for the inputs, the combinatorial explosion caused by the piecewise linear structure of neural networks, and the lack of a representation common to neural networks and formal verification. To tackle these issues, we present CAMUS, a theoretical framework for specifying verification problems on perceptual inputs using simulators. We then exploit the piecewise linear structure of neural networks in DISCO, a parallel verification algorithm, to mitigate the combinatorial explosion. Finally, we implement these contributions in ISAIEH, a prototype platform for neural network encoding and verification.
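The combinatorial explosion mentioned above can be made concrete with a toy sketch (our own illustration, not code from the thesis): each ReLU neuron in a network is either active (passing its input through) or inactive (outputting zero), so the network behaves linearly on each region of input space defined by an on/off activation pattern. A case-splitting verifier must, in the worst case, examine every such pattern, and with n ReLU neurons there are up to 2^n of them.

```python
import itertools

def activation_patterns(n_neurons):
    """Enumerate all candidate on/off patterns for n ReLU neurons.

    Each pattern fixes, for every neuron, whether its pre-activation is
    positive (1, neuron acts as identity) or not (0, neuron outputs zero).
    Under a fixed pattern the whole network is a plain linear map, which
    is easy to verify; the difficulty is the number of cases.
    """
    return list(itertools.product([0, 1], repeat=n_neurons))

# The case count doubles with every neuron: 2^n patterns in the worst case.
for n in (4, 8, 16):
    print(n, "neurons ->", len(activation_patterns(n)), "cases")
```

Not every pattern is feasible for a trained network (many regions are empty), which is precisely what verification algorithms such as the thesis's DISCO exploit to prune and parallelize the search.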