Cracking Open the Black Box: What Observations Can Tell Us About Reinforcement Learning Agents

Abstract

Machine learning (ML) solutions to challenging networking problems, while promising, are hard to interpret; the uncertainty about how they would behave in untested scenarios has hindered adoption. Using a case study of an ML-based video rate adaptation model, we show that carefully applying interpretability tools and systematically exploring the model inputs can identify unwanted or anomalous model behaviors, hinting at a potential path towards increasing trust in ML-based solutions.
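To illustrate the kind of systematic input exploration the abstract refers to, here is a minimal sketch (not the paper's code): it sweeps one dimension of the agent's state while holding the others fixed, and flags counter-intuitive decisions. The `policy` function, the state layout, and the anomaly criterion are all hypothetical stand-ins.

```python
# A hedged sketch of sweeping one input of a trained rate-adaptation
# policy to surface anomalous outputs. `policy` is a hypothetical
# stand-in for the trained agent: it maps a state vector to a chosen
# bitrate index.
import numpy as np

def sweep_input(policy, base_state, dim, values):
    """Vary one state dimension over `values`, holding the rest fixed,
    and record the bitrate index the policy selects at each point."""
    choices = []
    for v in values:
        state = np.array(base_state, dtype=float)
        state[dim] = v
        choices.append(policy(state))
    return choices

def flag_anomalies(values, choices):
    """Flag points where the chosen bitrate drops as the swept input
    (e.g., buffer occupancy) grows -- a counter-intuitive decision
    worth inspecting by hand."""
    return [(values[i], choices[i - 1], choices[i])
            for i in range(1, len(choices))
            if choices[i] < choices[i - 1]]

# Example (assumed state layout): sweep buffer occupancy, here taken to
# be dimension 0, from 0 to 60 seconds.
# buffers = np.linspace(0, 60, 61)
# anomalies = flag_anomalies(buffers, sweep_input(policy, base, 0, buffers))
```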

Publication
Proceedings of NetAI'19
Arnaud Dethise
PhD Student (PhD 2023)