Relating Input Concepts to Convolutional Neural Network Decisions

Title: Relating Input Concepts to Convolutional Neural Network Decisions
Publication Type: Conference Proceedings
Year of Publication: 2017
Authors: Xie, N, Sarker, MK, Doran, D, Hitzler, P, Raymer, M
Conference Name: NIPS 2017 Workshop: Interpreting, Explaining and Visualizing Deep Learning, NIPS IEVDL 2017
Date Published: 12/2017
Publisher: NIPS
Conference Location: CA, USA
Abstract

Many current methods to interpret convolutional neural networks (CNNs) use visualization techniques and words to highlight concepts of the input seemingly relevant to a CNN’s decision. These methods hypothesize that the recognition of such concepts is instrumental in the decision a CNN reaches, but the nature of this relationship has not been well explored. To address this gap, this paper examines the quality of a concept’s recognition by a CNN and the degree to which these recognitions are associated with CNN decisions. The study considers a CNN trained for scene recognition over the ADE20k dataset. It uses a novel approach to find and score the strength of minimally distributed representations of input concepts (defined by objects in scene images) across late-stage feature maps. Subsequent analysis finds evidence that concept recognition impacts decision making. Strong recognition of concepts that occur frequently in only a few scenes is indicative of correct decisions, but recognizing concepts common to many scenes may mislead the network.
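As a rough illustration of the kind of analysis the abstract describes, the sketch below scores how strongly a single late-stage feature map responds to an object concept in a scene image. It assumes a simple IoU-style overlap between a thresholded activation map and an object segmentation mask of matching resolution; the thresholding quantile, the resizing step, and the scoring function itself are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def concept_recognition_score(feature_map, concept_mask, threshold_quantile=0.95):
    """Score how strongly one late-stage feature map responds to a concept.

    feature_map:  2-D activation map from a late convolutional layer
                  (assumed already resized to the mask's spatial shape).
    concept_mask: binary mask marking pixels of the object/concept
                  in the input scene image.
    Returns an IoU-style overlap score in [0, 1].
    """
    assert feature_map.shape == concept_mask.shape

    # Binarize the activation map at a high quantile of its own values.
    thresh = np.quantile(feature_map, threshold_quantile)
    active = feature_map >= thresh

    # Intersection-over-union between the activated region and the concept region.
    intersection = np.logical_and(active, concept_mask).sum()
    union = np.logical_or(active, concept_mask).sum()
    return intersection / union if union > 0 else 0.0


# Hypothetical usage: score every unit in a layer and keep the best match per concept.
rng = np.random.default_rng(0)
feature_maps = rng.random((256, 56, 56))                 # placeholder layer activations
mask = np.zeros((56, 56), dtype=bool)
mask[20:40, 10:30] = True                                # placeholder object mask

scores = [concept_recognition_score(fm, mask) for fm in feature_maps]
print("best unit:", int(np.argmax(scores)), "score:", max(scores))
```

A per-concept score of this kind could then be correlated with whether the network's scene prediction was correct, which is the flavor of association the abstract reports.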
