%0 Conference Paper
%B AAAI-MAKE 2021
%D 2021
%T Neuro-Symbolic Deductive Reasoning for Cross-Knowledge Graph Entailment
%A Monireh Ebrahimi
%A Md Kamruzzaman Sarker
%A Federico Bianchi
%A Ning Xie
%A Aaron Eberhart
%A Derek Doran
%A HyeongSik Kim
%A Pascal Hitzler
%I AAAI
%G eng

%0 Conference Proceedings
%B Twelfth International Workshop on Neural-Symbolic Learning and Reasoning, NeSy
%D 2017
%T Explaining Trained Neural Networks with Semantic Web Technologies: First Steps
%A Md Kamruzzaman Sarker
%A Ning Xie
%A Derek Doran
%A Michael Raymer
%A Pascal Hitzler
%K Artificial Intelligence
%X

The ever-increasing prevalence of publicly available structured data on the World Wide Web enables new applications in a variety of domains. In this paper, we provide a conceptual approach that leverages such data to explain the input-output behavior of trained artificial neural networks. We apply existing Semantic Web technologies to provide an experimental proof of concept.

%7 12
%C London, UK
%8 07/2017
%U http://daselab.cs.wright.edu/nesy/NeSy17/
%G eng
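A minimal illustrative sketch, not the method from the paper above: assuming the network's inputs have already been aligned with classes from background knowledge, the toy Python below searches a small candidate set for a class that covers the inputs the network assigned to a target label (positives) while excluding the rest (negatives). The toy knowledge base and all names (individual_types, coverage_score) are hypothetical.

# Hypothetical sketch: explain a trained classifier's input-output behavior
# by finding a background-knowledge concept that covers the inputs the
# network mapped to one class (positives) but not the others (negatives).

# Toy background knowledge: each input individual carries ontology types.
individual_types = {
    "img1": {"Bridge", "RiverCrossing"},
    "img2": {"Bridge", "FootBridge"},
    "img3": {"Building", "Tower"},
    "img4": {"Building", "Church"},
}

positives = {"img1", "img2"}  # inputs the network assigned to the target class
negatives = {"img3", "img4"}  # inputs it did not

def coverage_score(concept, pos, neg, types):
    """Fraction of positives covered minus fraction of negatives covered."""
    pos_cov = sum(concept in types[i] for i in pos) / len(pos)
    neg_cov = sum(concept in types[i] for i in neg) / len(neg)
    return pos_cov - neg_cov

# Candidate atomic concepts drawn from the toy ontology's class names.
candidates = {t for ts in individual_types.values() for t in ts}

best = max(candidates, key=lambda c: coverage_score(c, positives, negatives, individual_types))
print(best)  # "Bridge": covers all positives, no negatives, so it "explains" the class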

%0 Conference Proceedings
%B NIPS 2017 Workshop: Interpreting, Explaining and Visualizing Deep Learning, NIPS IEVDL 2017
%D 2017
%T Relating Input Concepts to Convolutional Neural Network Decisions
%A Ning Xie
%A Md Kamruzzaman Sarker
%A Derek Doran
%A Pascal Hitzler
%A Michael Raymer
%X

Many current methods to interpret convolutional neural networks (CNNs) use visualization techniques and words to highlight concepts of the input seemingly relevant to a CNN's decision. The methods hypothesize that the recognition of these concepts is instrumental in the decision a CNN reaches, but the nature of this relationship has not been well explored. To address this gap, this paper examines the quality of a concept's recognition by a CNN and the degree to which these recognitions are associated with CNN decisions. The study considers a CNN trained for scene recognition over the ADE20k dataset. It uses a novel approach to find and score the strength of minimally distributed representations of input concepts (defined by objects in scene images) across late-stage feature maps. Subsequent analysis finds evidence that concept recognition impacts decision making. Strong recognition of concepts that occur frequently in few scenes is indicative of correct decisions, but recognizing concepts common to many scenes may mislead the network.

%I NIPS
%C CA, USA
%8 12/2017
%G eng
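A minimal illustrative sketch of the general setup described in the abstract above, not the paper's exact scoring procedure: given late-stage feature maps and a binary mask marking an object (concept) in the input scene, the toy Python below scores each channel by how much more it activates inside the concept region than outside, then keeps the few most selective channels as a candidate minimally distributed representation. The score definition and all names are hypothetical.

# Hypothetical sketch of scoring how strongly a late-stage feature map
# "recognizes" an input concept (an object region in a scene image).
import numpy as np

def concept_recognition_score(feature_maps, object_mask):
    """feature_maps: (C, H, W) activations from a late convolutional layer.
    object_mask: (H, W) boolean mask of the concept's pixels, downsampled
    to the feature-map resolution.
    Returns per-channel scores: mean activation inside the concept region
    minus mean activation outside it (higher = stronger recognition)."""
    inside = feature_maps[:, object_mask].mean(axis=1)
    outside = feature_maps[:, ~object_mask].mean(axis=1)
    return inside - outside

rng = np.random.default_rng(0)
fmaps = rng.random((256, 14, 14))  # e.g. a 14x14 late-stage layer, 256 channels
mask = np.zeros((14, 14), dtype=bool)
mask[4:9, 4:9] = True              # toy "object" region

scores = concept_recognition_score(fmaps, mask)
# Candidate minimally distributed representation: the few channels that
# respond most selectively to the concept region.
top_channels = np.argsort(scores)[-5:]
print(top_channels, scores[top_channels])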