Ebrahimi, Monireh; Sarker, Md Kamruzzaman; Bianchi, Federico; Xie, Ning; Eberhart, Aaron; Doran, Derek; Kim, HyeongSik; Hitzler, Pascal. "Neuro-Symbolic Deductive Reasoning for Cross-Knowledge Graph Entailment." AAAI, 2021.
https://daselab.cs.ksu.edu/publications/neuro-symbolic-deductive-reasoning-cross-knowledge-graph-entailment

Sarker, Md Kamruzzaman; Xie, Ning; Doran, Derek; Raymer, Michael; Hitzler, Pascal. "Explaining Trained Neural Networks with Semantic Web Technologies: First Steps." London, UK, 07/2017.
Keywords: Artificial Intelligence
Abstract: The ever-increasing prevalence of publicly available structured data on the World Wide Web enables new applications in a variety of domains. In this paper, we provide a conceptual approach that leverages such data in order to explain the input-output behavior of trained artificial neural networks. We apply existing Semantic Web technologies in order to provide an experimental proof of concept.
http://daselab.cs.wright.edu/nesy/NeSy17/

Xie, Ning; Sarker, Md Kamruzzaman; Doran, Derek; Hitzler, Pascal; Raymer, Michael. "Relating Input Concepts to Convolutional Neural Network Decisions." NIPS, CA, USA, 12/2017.
Abstract: Many current methods to interpret convolutional neural networks (CNNs) use visualization techniques and words to highlight concepts of the input seemingly relevant to a CNN's decision. The methods hypothesize that the recognition of these concepts is instrumental in the decision a CNN reaches, but the nature of this relationship has not been well explored. To address this gap, this paper examines the quality of a concept's recognition by a CNN and the degree to which the recognitions are associated with CNN decisions. The study considers a CNN trained for scene recognition over the ADE20k dataset. It uses a novel approach to find and score the strength of minimally distributed representations of input concepts (defined by objects in scene images) across late-stage feature maps. Subsequent analysis finds evidence that concept recognition impacts decision making. Strong recognition of concepts frequently occurring in few scenes is indicative of correct decisions, but recognizing concepts common to many scenes may mislead the network.
https://daselab.cs.ksu.edu/publications/relating-input-concepts-convolutional-neural-network-decisions