Analyzing Learning-Based Networked Systems with Formal Verification

Abstract

As more applications of (deep) neural networks emerge in the computer networking domain, the correctness and predictability of a neural agent's behavior on corner-case inputs become crucial. To enable the formal analysis of agents with nontrivial properties, we bridge the gap between specifying intended high-level behavior and expressing low-level statements that can be directly encoded into an efficient verification framework. Our results show that, within minutes, one can establish the resilience of a neural network to adversarial attacks on its inputs, as well as formally prove properties that previously relied on educated guesses. Finally, we show how formal verification can help create an accurate visual representation of an agent's behavior, enabling visual inspection and improving its trustworthiness.
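
To make the kind of property the abstract refers to concrete, the sketch below encodes a local adversarial-robustness check for a toy ReLU network as an SMT query using Z3. This is only an illustration of the general idea, not the verification framework used in the paper; the network weights, the nominal input `x_star`, and the perturbation budget `eps` are hypothetical placeholders.

```python
# A minimal sketch (not the paper's toolchain): encoding a local
# adversarial-robustness query for a tiny ReLU network as an SMT
# problem with Z3. All weights, the nominal input, and epsilon are
# hypothetical values chosen for illustration.
from z3 import Reals, Solver, If, unsat

def relu(z):
    # Exact piecewise-linear encoding of ReLU as an If-term.
    return If(z > 0, z, 0)

# Hypothetical 2-2-2 network: two inputs, one hidden ReLU layer, two logits.
W1 = [[1.0, -0.5], [0.5, 1.0]]
b1 = [0.0, 0.1]
W2 = [[1.0, -1.0], [-1.0, 1.0]]
b2 = [0.0, 0.0]

x = Reals('x0 x1')
h = [relu(sum(W1[i][j] * x[j] for j in range(2)) + b1[i]) for i in range(2)]
y = [sum(W2[i][j] * h[j] for j in range(2)) + b2[i] for i in range(2)]

# Nominal input (predicted class 1) and L-infinity perturbation budget.
x_star = [0.6, 0.2]
eps = 0.04

s = Solver()
# Restrict the adversary to the eps-ball around the nominal input.
for i in range(2):
    s.add(x[i] >= x_star[i] - eps, x[i] <= x_star[i] + eps)
# Search for a counterexample to "class 1 stays the winner".
s.add(y[1] <= y[0])

if s.check() == unsat:
    print("Verified: no perturbation within eps changes the prediction.")
else:
    print("Counterexample found:", s.model())
```

Because the negated property is unsatisfiable over the entire perturbation ball, the solver's answer is a proof that no adversarial input within `eps` flips the decision, rather than the statistical evidence an empirical attack would provide.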

Publication
Proceedings of INFOCOM'21
Arnaud Dethise
Alumni

PhD 2023.