<?xml version="1.0" encoding="UTF-8"?><xml><records><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Monireh Ebrahimi</style></author><author><style face="normal" font="default" size="100%">Md Kamruzzaman Sarker</style></author><author><style face="normal" font="default" size="100%">Federico Bianchi</style></author><author><style face="normal" font="default" size="100%">Ning Xie</style></author><author><style face="normal" font="default" size="100%">Aaron Eberhart</style></author><author><style face="normal" font="default" size="100%">Derek Doran</style></author><author><style face="normal" font="default" size="100%">HyeongSik Kim</style></author><author><style face="normal" font="default" size="100%">Pascal Hitzler</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Neuro-Symbolic Deductive Reasoning for Cross-Knowledge Graph Entailment</style></title><secondary-title><style face="normal" font="default" size="100%">AAAI-MAKE 2021</style></secondary-title></titles><dates><year><style face="normal" font="default" size="100%">2021</style></year></dates><publisher><style face="normal" font="default" size="100%">AAAI</style></publisher><language><style face="normal" font="default" size="100%">eng</style></language></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>10</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Md Kamruzzaman Sarker</style></author><author><style face="normal" font="default" size="100%">Ning Xie</style></author><author><style face="normal" font="default" size="100%">Derek Doran</style></author><author><style face="normal" font="default" size="100%">Michael Raymer</style></author><author><style face="normal" font="default" size="100%">Pascal Hitzler</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Explaining Trained Neural Networks with Semantic Web Technologies: First Steps</style></title><secondary-title><style face="normal" font="default" size="100%">Twelfth International Workshop on Neural-Symbolic Learning and Reasoning, NeSy</style></secondary-title></titles><keywords><keyword><style face="normal" font="default" size="100%">Artificial Intelligence</style></keyword></keywords><dates><year><style face="normal" font="default" size="100%">2017</style></year><pub-dates><date><style face="normal" font="default" size="100%">07/2017</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://daselab.cs.wright.edu/nesy/NeSy17/</style></url></web-urls></urls><edition><style face="normal" font="default" size="100%">12</style></edition><pub-location><style face="normal" font="default" size="100%">London, UK</style></pub-location><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;The ever-increasing prevalence of publicly available structured data on the World Wide Web enables new applications in a variety of domains. In this paper, we provide a conceptual approach that leverages such data to explain the input-output behavior of trained artificial neural networks. We apply existing Semantic Web technologies to provide an experimental proof of concept.&lt;/p&gt;
</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>10</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Ning Xie</style></author><author><style face="normal" font="default" size="100%">Md Kamruzzaman Sarker</style></author><author><style face="normal" font="default" size="100%">Derek Doran</style></author><author><style face="normal" font="default" size="100%">Pascal Hitzler</style></author><author><style face="normal" font="default" size="100%">Michael Raymer</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Relating Input Concepts to Convolutional Neural Network Decisions</style></title><secondary-title><style face="normal" font="default" size="100%">NIPS 2017 Workshop: Interpreting, Explaining and Visualizing Deep Learning, NIPS IEVDL 2017</style></secondary-title></titles><dates><year><style face="normal" font="default" size="100%">2017</style></year><pub-dates><date><style face="normal" font="default" size="100%">12/2017</style></date></pub-dates></dates><publisher><style face="normal" font="default" size="100%">NIPS</style></publisher><pub-location><style face="normal" font="default" size="100%">CA, USA</style></pub-location><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;Many current methods to interpret convolutional neural networks (CNNs) use visualization techniques and words to highlight concepts of the input seemingly relevant to a CNN’s decision. The methods hypothesize that the recognition of these concepts is instrumental in the decision a CNN reaches, but the nature of this relationship has not been well explored. To address this gap, this paper examines the quality of a concept’s recognition by a CNN and the degree to which these recognitions are associated with CNN decisions.
The study considers a CNN trained for scene recognition over the ADE20k dataset. It uses a novel approach to find and score the strength of minimally distributed representations of input concepts (defined by objects in scene images) across late-stage feature maps. Subsequent analysis finds evidence that concept recognition impacts decision making. Strong recognition of concepts that occur frequently in only a few scenes is indicative of correct decisions, but recognizing concepts common to many scenes may mislead the network.&lt;/p&gt;
</style></abstract></record></records></xml>