Exposing Previously Undetectable Faults in Deep Neural Networks

Abstract

Existing methods for testing DNNs solve the oracle problem by constraining the raw features (e.g., image pixel values) to be within a small distance of a dataset example for which the desired DNN output is known. However, this constraint limits the kinds of faults these approaches are able to detect. In this paper, we introduce a novel DNN testing method that is able to find faults in DNNs that other methods cannot. The crux is that, by leveraging generative machine learning, we can generate fresh test cases that vary in their high-level features (for images, these include object shape, location, texture, and colour). We demonstrate that our approach is capable of detecting deliberately injected faults as well as new faults in state-of-the-art DNNs, and that in both cases, existing methods are unable to find these faults.
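To illustrate the core idea, the sketch below shows one way a class-conditional generative model could be used to produce fresh test cases whose correct label is known, so that high-level features vary freely rather than being constrained to small pixel-level perturbations of a dataset example. This is a minimal sketch only, not the paper's implementation; the `generator` and `classifier` arguments are hypothetical stand-ins for any conditional generator and DNN under test.

```python
# Minimal sketch (assumed interface, not the authors' code): sample latent
# codes for a class-conditional generator and use the conditioning class as
# the test oracle for the classifier under test.
import torch


def find_latent_space_faults(generator, classifier, num_classes,
                             latent_dim, n_samples=1000, device="cpu"):
    """Return generated inputs on which the classifier disagrees with the
    class label that the generator was conditioned on."""
    faults = []
    with torch.no_grad():
        for _ in range(n_samples):
            # Sample a latent code: high-level features (shape, texture,
            # colour, ...) vary freely rather than staying near a dataset example.
            z = torch.randn(1, latent_dim, device=device)
            y = torch.randint(0, num_classes, (1,), device=device)
            x = generator(z, y)                # fresh test input
            pred = classifier(x).argmax(dim=1)
            if pred.item() != y.item():        # oracle: the conditioning class
                faults.append((x.cpu(), y.item(), pred.item()))
    return faults
```

Under this framing, any disagreement between the classifier's prediction and the generator's conditioning class is a candidate fault, even when the generated input lies far from every training example in pixel space.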

Publication
Exposing Previously Undetectable Faults in Deep Neural Networks
Isaac Dunn
Century Fellow
