TY - CONF
T1 - Neuro-Symbolic Deductive Reasoning for Cross-Knowledge Graph Entailment
T2 - AAAI-MAKE 2021
Y1 - 2021
A1 - Monireh Ebrahimi
A1 - Md Kamruzzaman Sarker
A1 - Federico Bianchi
A1 - Ning Xie
A1 - Aaron Eberhart
A1 - Derek Doran
A1 - HyeongSik Kim
A1 - Pascal Hitzler
JF - AAAI-MAKE 2021
PB - AAAI
ER -
TY - Generic
T1 - Explaining Trained Neural Networks with Semantic Web Technologies: First Steps
T2 - Twelfth International Workshop on Neural-Symbolic Learning and Reasoning, NeSy
Y1 - 2017
A1 - Md Kamruzzaman Sarker
A1 - Ning Xie
A1 - Derek Doran
A1 - Michael Raymer
A1 - Pascal Hitzler
KW - Artificial Intelligence
AB -

The ever-increasing prevalence of publicly available structured data on the World Wide Web enables new applications in a variety of domains. In this paper, we provide a conceptual approach that leverages such data in order to explain the input-output behavior of trained artificial neural networks. We apply existing Semantic Web technologies in order to provide an experimental proof of concept.

JF - Twelfth International Workshop on Neural-Symbolic Learning and Reasoning, NeSy
CY - London, UK
UR - http://daselab.cs.wright.edu/nesy/NeSy17/
ER -
TY - Generic
T1 - Relating Input Concepts to Convolutional Neural Network Decisions
T2 - NIPS 2017 Workshop: Interpreting, Explaining and Visualizing Deep Learning, NIPS IEVDL 2017
Y1 - 2017
A1 - Ning Xie
A1 - Md Kamruzzaman Sarker
A1 - Derek Doran
A1 - Pascal Hitzler
A1 - Michael Raymer
AB -

Many current methods to interpret convolutional neural networks (CNNs) use visualization techniques and words to highlight concepts of the input seemingly relevant to a CNN’s decision. The methods hypothesize that the recognition of these concepts is instrumental in the decision a CNN reaches, but the nature of this relationship has not been well explored. To address this gap, this paper examines the quality of a concept’s recognition by a CNN and the degree to which the recognitions are associated with CNN decisions. The study considers a CNN trained for scene recognition over the ADE20k dataset. It uses a novel approach to find and score the strength of minimally distributed representations of input concepts (defined by objects in scene images) across late-stage feature maps. Subsequent analysis finds evidence that concept recognition impacts decision making. Strong recognition of concepts frequently occurring in few scenes is indicative of correct decisions, but recognizing concepts common to many scenes may mislead the network.

JF - NIPS 2017 Workshop: Interpreting, Explaining and Visualizing Deep Learning, NIPS IEVDL 2017
PB - NIPS
CY - CA, USA
ER -