Treffer: Robustness of Neural Networks Based on MIP Optimization

Title:
Robustness of Neural Networks Based on MIP Optimization
Contributors:
IRT SystemX, Confiance AI
Publisher Information:
HAL CCSD
Publication Year:
2023
Collection:
Archive ouverte HAL (Hyper Article en Ligne, CCSD - Centre pour la Communication Scientifique Directe)
Document Type:
Report
Language:
English
Rights:
info:eu-repo/semantics/OpenAccess
Accession Number:
edsbas.828FEB22
Database:
BASE

Further Information

Even though Deep Learning methods have demonstrated their efficiency, they do not currently provide the expected security guarantees. They are known to be vulnerable to adversarial attacks, where maliciously perturbed inputs lead to erroneous model outputs. The success of Deep Learning and its potential use in many safety-critical applications has motivated research on formal verification of Neural Network models. A possible way to find the minimal perturbation that changes the model decision (an adversarial attack) is to transform the problem, with the help of binary variables and the classical big-M formulation, into a Mixed Integer Program (MIP). In this paper, we propose a global optimization approach to compute the optimal perturbation using a dedicated branch-and-bound algorithm. A specific tree search strategy is built based on greedy forward selection algorithms. We show that each subproblem involved at a given node can be evaluated via a specific convex optimization problem with box constraints and without binary variables, for which an active-set algorithm is used. Our method is more efficient than the generic MIP solver Gurobi and state-of-the-art methods for MIPs such as MIPverify.
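To make the big-M transformation mentioned in the abstract concrete, the sketch below encodes a single ReLU unit y = max(0, x) with a binary phase variable z and known pre-activation bounds lo &lt;= x &lt;= up (lo &lt; 0 &lt; up), which is the standard building block of such MIP formulations. The function name and constants are illustrative, not taken from the paper.

```python
def relu_bigm_constraints(x, y, z, lo, up):
    """Check the classical big-M encoding of y = ReLU(x) for one unit.

    x : pre-activation value, assumed to satisfy lo <= x <= up (lo < 0 < up)
    y : post-activation value
    z : binary phase variable (z = 1 means the unit is active, i.e. x >= 0)
    The four linear inequalities below force y = max(0, x) whenever z is
    consistent with the sign of x.
    """
    return (
        y >= x                      # active phase: y cannot undershoot x
        and y >= 0                  # ReLU output is nonnegative
        and y <= x - lo * (1 - z)   # z = 1 forces y <= x (since lo < 0)
        and y <= up * z             # z = 0 forces y = 0
    )

# For any x in [lo, up], y = max(0, x) with the matching z is feasible.
lo, up = -3.0, 5.0
for x in [-3.0, -1.5, 0.0, 0.5, 2.0, 5.0]:
    y = max(0.0, x)
    z = 1 if x >= 0 else 0
    assert relu_bigm_constraints(x, y, z, lo, up)

# An inconsistent assignment (active input, inactive phase) is infeasible.
assert not relu_bigm_constraints(2.0, 0.0, 0, lo, up)
```

Stacking one such binary variable per unstable neuron over all layers yields the MIP that the paper's branch-and-bound algorithm then solves; fixing all binaries at a node is what leaves the box-constrained convex subproblem the abstract refers to.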