<?xml version="1.0" encoding="UTF-8"?>
<xml>
  <records>
    <record>
      <source-app name="Biblio" version="7.x">Drupal-Biblio</source-app>
      <ref-type>10</ref-type>
      <contributors>
        <authors>
          <author><style face="normal" font="default" size="100%">Abhilekha Dalal</style></author>
        </authors>
      </contributors>
      <titles>
        <title><style face="normal" font="default" size="100%">Understanding CNN Hidden Neuron Activations using Concept Induction over Background Knowledge</style></title>
        <secondary-title><style face="normal" font="default" size="100%">The 23rd International Semantic Web Conference, ISWC 2024</style></secondary-title>
      </titles>
      <keywords>
        <keyword><style face="normal" font="default" size="100%">Concept Induction</style></keyword>
        <keyword><style face="normal" font="default" size="100%">Convolutional Neural Network</style></keyword>
        <keyword><style face="normal" font="default" size="100%">Explainable AI</style></keyword>
        <keyword><style face="normal" font="default" size="100%">Knowledge Graph</style></keyword>
      </keywords>
      <dates>
        <year><style face="normal" font="default" size="100%">2024</style></year>
      </dates>
      <language><style face="normal" font="default" size="100%">eng</style></language>
      <abstract><style face="normal" font="default" size="100%">A major challenge in Explainable AI is interpreting hidden neuron activations accurately. These interpretations can reveal what a deep learning system perceives as relevant in the input data, thereby addressing the black-box nature of such systems. The state of the art indicates that hidden node activations can be interpretable by humans, but systematic automated methods to verify these interpretations are lacking, especially methods that utilize substantial background knowledge and inherently explainable techniques. In this proposal, we introduce a novel model-agnostic post-hoc Explainable AI method based on a Wikipedia-derived concept hierarchy with approximately 2 million classes. Our approach utilizes OWL-reasoning-based Concept Induction for explanation generation and compares with off-the-shelf pre-trained multimodal explainable methods. Our results demonstrate that our method automatically provides meaningful class expressions as explanations for individual neurons in the dense layer of a Convolutional Neural Network, outperforming prior work in both quantitative and qualitative aspects.</style></abstract>
    </record>
  </records>
</xml>