Robust training and verification of implicit neural networks: a non-Euclidean contractive approach

S. Jafarpour, A. Davydov, M. Abate, F. Bullo, S. Coogan
ICML workshop on Formal Verification of Machine Learning (WFVML 2022), 2022

Abstract

This paper proposes a theoretical and computational framework for training and robustness verification of implicit neural networks based on non-Euclidean contraction theory. The basic idea is to cast the robustness analysis of a neural network as a reachability problem and to use (i) the l-infinity-norm input-output Lipschitz constant and (ii) the tight inclusion function of the network to over-approximate its reachable sets. First, for a given implicit neural network, we use l-infinity matrix measures to propose sufficient conditions for its well-posedness, design an iterative algorithm to compute its fixed points, and provide upper bounds on its l-infinity-norm input-output Lipschitz constant. Second, we introduce a related embedded network and show that it provides an l-infinity-norm box over-approximation of the reachable sets of the original network. Moreover, we use the embedded network to design an iterative algorithm that computes upper bounds on the original system's tight inclusion function. Third, we combine the upper bounds on the Lipschitz constants and on the tight inclusion functions to design two algorithms for the training and robustness verification of implicit neural networks. Finally, we apply our algorithms to train implicit neural networks on the MNIST dataset and compare the robustness of the resulting models with that of models trained via existing approaches in the literature.
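To make the first step concrete, here is a minimal NumPy sketch of the fixed-point computation the abstract refers to, under the common assumption that the implicit network has the form x = phi(A x + B u + b) with phi = ReLU. The well-posedness check via the l-infinity matrix measure mu_inf(A) < 1 follows the sufficient condition described above; the specific matrices, tolerances, and iteration scheme here are illustrative choices, not the paper's exact algorithm.

```python
import numpy as np

def mu_inf(A):
    """l-infinity matrix measure: max_i (a_ii + sum_{j != i} |a_ij|)."""
    off_diag = np.abs(A) - np.diag(np.abs(np.diag(A)))
    return np.max(np.diag(A) + off_diag.sum(axis=1))

def implicit_forward(A, B, b, u, tol=1e-8, max_iter=1000):
    """Compute the fixed point x = relu(A x + B u + b) by simple iteration.

    The sufficient condition mu_inf(A) < 1 guarantees the iteration is a
    contraction in the l-infinity norm, hence well-posedness of the network.
    """
    assert mu_inf(A) < 1, "sufficient well-posedness condition violated"
    x = np.zeros(A.shape[0])
    for _ in range(max_iter):
        x_new = np.maximum(A @ x + B @ u + b, 0.0)  # ReLU activation
        if np.max(np.abs(x_new - x)) < tol:         # l-infinity residual
            return x_new
        x = x_new
    return x
```

Under mu_inf(A) < 1, a standard contraction argument also yields Lipschitz-type input-output bounds of the kind the abstract mentions (roughly proportional to 1/(1 - mu_inf(A))); the paper derives the precise constants.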
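The embedded-network idea behind the box over-approximation can also be sketched. Assuming the same form x = relu(A x + B u + b), one splits A and B into their entrywise nonnegative and nonpositive parts and iterates a coupled upper/lower system; for inputs in a box [ul, uu], the resulting pair (xl, xu) gives a componentwise l-infinity box containing the reachable fixed points. This is an illustrative sketch of the construction, not the paper's exact embedded system.

```python
import numpy as np

def split_pos_neg(M):
    """Entrywise split M = Mp + Mn with Mp >= 0 and Mn <= 0."""
    return np.maximum(M, 0.0), np.minimum(M, 0.0)

def embedded_box(A, B, b, ul, uu, n_iter=500):
    """Iterate the embedded (monotone) system to a box [xl, xu].

    Upper states are driven by (upper, lower) via (Ap, An), and symmetrically
    for the lower states, so the map is monotone and the box only widens
    enough to cover all inputs u in [ul, uu].
    """
    Ap, An = split_pos_neg(A)
    Bp, Bn = split_pos_neg(B)
    n = A.shape[0]
    xl = np.zeros(n)
    xu = np.zeros(n)
    for _ in range(n_iter):
        xu_new = np.maximum(Ap @ xu + An @ xl + Bp @ uu + Bn @ ul + b, 0.0)
        xl_new = np.maximum(Ap @ xl + An @ xu + Bp @ ul + Bn @ uu + b, 0.0)
        xu, xl = xu_new, xl_new
    return xl, xu
```

For a nominal input inside [ul, uu], the fixed point of the original network lies inside the returned box, which is how the reachable set is over-approximated for verification.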