Automating the Testing of OpenFlow Applications

Abstract

Software-defined networking, and the emergence of OpenFlow-capable switches, enables a wide range of new network functionality. However, enhanced programmability inevitably leads to more software faults (or bugs). We believe that tools for testing OpenFlow programs are critical to the success of the new technology. Yet the way OpenFlow applications interact with the data plane raises several challenges. First, the space of possible inputs (e.g., packet headers and inter-packet timings) is huge. Second, the centralized controller has an indirect view of the traffic and experiences unavoidable delays in installing rules in the switches. Third, external factors like user behavior (e.g., mobility) and higher-layer protocols (e.g., the TCP state machine) affect the correctness of OpenFlow programs.

In this work-in-progress paper, we extend techniques for symbolic execution to generate inputs that systematically explore the space of system executions. First, we analyze controller applications to identify equivalence classes of packets that exercise different parts of the code. Second, we propose several network models with increasing precision, ranging from simple traffic models to live testing on the target network. Initial experiences with our prototype, which symbolically executes OpenFlow applications written in Python, suggest that our techniques can help programmers identify bugs in their OpenFlow programs.
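To make the first step concrete, here is a minimal sketch of the core idea: enumerating packet equivalence classes by exploring every branch of a toy packet-in handler. This is not the paper's prototype; all names (Sym, PathTrace, handle_packet_in, the header fields) are hypothetical, and a real symbolic executor would track path constraints with an SMT solver rather than replaying boolean branch decisions as done here.

```python
# Hypothetical sketch: brute-force path exploration over a toy
# OpenFlow-style packet-in handler. Each explored path corresponds to
# one equivalence class of packets that exercises the same code.

class PathTrace:
    """Records branch outcomes; a forced prefix replays an earlier path."""
    def __init__(self, forced):
        self.forced = forced          # decisions to replay
        self.log = []                 # (condition, outcome) for this run

    def decide(self, cond):
        i = len(self.log)
        outcome = self.forced[i] if i < len(self.forced) else True
        self.log.append((cond, outcome))
        return outcome


class Sym:
    """A symbolic header field: comparisons become branch decisions."""
    def __init__(self, name, trace):
        self.name, self.trace = name, trace

    def __eq__(self, other):
        return self.trace.decide(f"{self.name} == {other}")


def handle_packet_in(pkt):
    """Toy controller logic that branches on header fields."""
    if pkt["dl_type"] == 0x0800:      # IPv4?
        if pkt["tp_dst"] == 80:       # web traffic?
            return "redirect-to-proxy"
        return "install-forwarding-rule"
    return "flood"


def explore(handler):
    """Depth-first enumeration of all branch combinations."""
    stack, seen = [[]], set()
    while stack:
        trace = PathTrace(stack.pop())
        pkt = {"dl_type": Sym("dl_type", trace),
               "tp_dst": Sym("tp_dst", trace)}
        action = handler(pkt)
        path = tuple(trace.log)
        if path in seen:
            continue
        seen.add(path)
        cls = " and ".join(c if o else f"not({c})" for c, o in path)
        print(f"{cls or 'any packet'}  ->  {action}")
        # Schedule a flip of each decision made past the forced prefix.
        for i in range(len(trace.forced), len(trace.log)):
            prefix = [o for _, o in trace.log[:i]]
            stack.append(prefix + [not trace.log[i][1]])


if __name__ == "__main__":
    explore(handle_packet_in)
```

Running the sketch prints three equivalence classes (HTTP-over-IPv4, other IPv4, and non-IPv4 traffic), one test input per distinct code path; the paper's second contribution, the network models, would then govern how such inputs are injected into the system under test.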

Publication
Proceedings of the 1st International Workshop on Rigorous Protocol Engineering (WRiPE'11)
Marco Canini
Associate Professor of Computer Science

My current interest is in designing better systems support for AI/ML and providing practical implementations deployable in the real world.