Comparative analysis of interval reachability for robust implicit and feedforward neural networks

A. Davydov, Saber Jafarpour, Matthew Abate, F. Bullo, S. Coogan
IEEE Conference on Decision and Control, 2022


Implicit neural networks (INNs) are a class of learning models that use implicit algebraic equations as layers and have been shown to exhibit several notable benefits over traditional feedforward neural networks (FFNNs). In this paper, we use interval reachability analysis to study the robustness of INNs and compare them with FFNNs. We first introduce the notion of a tight inclusion function and use it to provide the tightest rectangular over-approximation of the neural network's input-output map. We also show that tight inclusion functions lead to sharper robustness guarantees than the well-studied robustness measures based on Lipschitz constants. Like exact Lipschitz constants, tight inclusion functions are computationally challenging to obtain, and thus we develop a framework based upon mixed monotonicity and contraction theory to estimate the tight inclusion functions for INNs. We show that our approach performs at least as well as, and generally better than, state-of-the-art interval-bound propagation methods for INNs. Finally, we design a novel optimization problem for training robust INNs and we provide empirical evidence that suitably-trained INNs can be more robust than comparably-trained FFNNs.
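To make the notion of a rectangular over-approximation concrete, the following is a minimal sketch (not from the paper) of interval-bound propagation through a single affine layer followed by a ReLU. The weight matrix, bias, and input box below are arbitrary illustrative values; the affine step splits the weight matrix into positive and negative parts so each output bound is attained at a vertex of the input box, and the ReLU step is exact because ReLU is monotone.

```python
import numpy as np

def interval_affine(W, b, lo, hi):
    """Propagate the box [lo, hi] through x -> W x + b.

    Splitting W into its positive and negative parts yields the
    tightest rectangular bound for an affine map: each output
    coordinate's extremum is attained at a vertex of the input box.
    """
    Wp = np.clip(W, 0, None)   # positive part of W
    Wn = np.clip(W, None, 0)   # negative part of W
    new_lo = Wp @ lo + Wn @ hi + b
    new_hi = Wp @ hi + Wn @ lo + b
    return new_lo, new_hi

def interval_relu(lo, hi):
    """ReLU is elementwise monotone, so applying it to the bounds is exact."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Bounds on f(x) = relu(W x + b) over the box [-0.1, 0.1]^2
W = np.array([[1.0, -2.0], [0.5, 1.0]])
b = np.array([0.0, -1.0])
lo, hi = np.array([-0.1, -0.1]), np.array([0.1, 0.1])
l1, h1 = interval_relu(*interval_affine(W, b, lo, hi))
# l1 = [0.0, 0.0], h1 = [0.3, 0.0]
```

Composing such interval maps layer by layer gives an inclusion function for a feedforward network; for an implicit network the paper instead bounds the fixed point of the implicit layer using mixed monotonicity and contraction arguments.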