<?xml version="1.0" encoding="UTF-8"?><xml><records><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>5</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Abhilekha Dalal</style></author><author><style face="normal" font="default" size="100%">Moumita Sen Sarma</style></author><author><style face="normal" font="default" size="100%">Avishek Das</style></author><author><style face="normal" font="default" size="100%">Samatha E. Akkamahadevi</style></author><author><style face="normal" font="default" size="100%">Eugene Y. Vasserman</style></author><author><style face="normal" font="default" size="100%">Pascal Hitzler</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Neurosymbolic Hidden Neuron Analysis in Convolutional Neural Networks</style></title><secondary-title><style face="normal" font="default" size="100%">Neuro-Symbolic AI: Bridging the Gap Between Neural Networks and Symbolic Reasoning</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2026</style></year></dates><publisher><style face="normal" font="default" size="100%">Elsevier</style></publisher><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p class=&quot;rtejustify&quot;&gt;This tutorial introduces a step-by-step, deductive pipeline for making the inner workings of neural networks more transparent by assigning human-understandable concepts to hidden neuron activations. The approach automatically maps neuron behavior to symbolic concepts drawn from structured knowledge sources and attaches an error margin to each label, providing a measure of confidence in its precision. 
While demonstrated in detail on the ADE20k scene dataset---including single-concept neurons, multiple neurons contributing to the same concept, and multi-concept neurons---the method is also applied to the SUN2012 dataset and adapted for a text classification task, highlighting its generalizability across modalities. The chapter is designed to be practical and educational, focusing on a replicable methodology that readers can adapt to varied applications. Through worked examples, visualizations, and evaluation strategies, the tutorial offers a clear, reusable framework for concept-based neuron analysis in both vision and language models.&lt;/p&gt;
</style></abstract><section><style face="normal" font="default" size="100%">8</style></section></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>5</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Adrita Barua</style></author><author><style face="normal" font="default" size="100%">Avishek Das</style></author><author><style face="normal" font="default" size="100%">Moumita Sen Sarma</style></author><author><style face="normal" font="default" size="100%">Pascal Hitzler</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Reasoning in Large Language Models: RAG and Beyond</style></title><secondary-title><style face="normal" font="default" size="100%">Neuro-Symbolic AI: Integrating Neural Networks and Symbolic Reasoning</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2026</style></year></dates><publisher><style face="normal" font="default" size="100%">Elsevier</style></publisher><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;This chapter presents a comprehensive overview of contemporary approaches that integrate neural networks and large language models (LLMs) with classical symbolic reasoning. We review the evolution of methods designed to embed logical inference within neural architectures and explore recent advances in prompting strategies, hybrid reasoning frameworks, retrieval-augmented learning, and reinforcement-based reasoning optimization methods. We discuss the symbolic foundations of logical reasoning and then analyze how neural and LLM-based methods have progressively evolved to emulate, extend, and optimize symbolic reasoning across diverse tasks. 
Finally, we explore emerging neurosymbolic paradigms that unify neural and symbolic reasoning to achieve interpretable, scalable, and generalizable intelligence. Our analysis underscores the growing importance of neurosymbolic AI as a foundational direction for developing reliable and explainable reasoning systems.&lt;/p&gt;
</style></abstract><section><style face="normal" font="default" size="100%">5</style></section></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Abhilekha Dalal</style></author><author><style face="normal" font="default" size="100%">Rushrukh Rayan</style></author><author><style face="normal" font="default" size="100%">Adrita Barua</style></author><author><style face="normal" font="default" size="100%">Samatha Ereshi Akkamahadevi</style></author><author><style face="normal" font="default" size="100%">Avishek Das</style></author><author><style face="normal" font="default" size="100%">Cara Widmer</style></author><author><style face="normal" font="default" size="100%">Eugene Y Vasserman</style></author><author><style face="normal" font="default" size="100%">Kamruzzaman Sarker</style></author><author><style face="normal" font="default" size="100%">Pascal Hitzler</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Toward a Neurosymbolic Understanding of Hidden Neuron Activations</style></title><secondary-title><style face="normal" font="default" size="100%">Neurosymbolic Artificial Intelligence</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2026</style></year></dates><volume><style face="normal" font="default" size="100%">2</style></volume><language><style face="normal" font="default" size="100%">eng</style></language></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Avishek Das</style></author><author><style face="normal" font="default" size="100%">Abhilekha Dalal</style></author><author><style face="normal" font="default" size="100%">Pascal 
Hitzler</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Hidden Neuron Activation Analysis on Labeled Text Data</style></title><secondary-title><style face="normal" font="default" size="100%">K-CAP '25: Knowledge Capture Conference 2025</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">Concept-based Explanation</style></keyword><keyword><style  face="normal" font="default" size="100%">Dense Layer Analysis</style></keyword><keyword><style  face="normal" font="default" size="100%">Explainable AI</style></keyword><keyword><style  face="normal" font="default" size="100%">Hidden Neuron Analysis</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2025</style></year><pub-dates><date><style  face="normal" font="default" size="100%">12/2025</style></date></pub-dates></dates><publisher><style face="normal" font="default" size="100%">ACM</style></publisher><pub-location><style face="normal" font="default" size="100%">USA</style></pub-location><pages><style face="normal" font="default" size="100%">206 - 210</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;Understanding the internal mechanisms of deep neural networks remains a central challenge in the field of Explainable Artificial Intelligence (XAI). With the rapid advancement of neural architectures in natural language processing (NLP), analyzing the role of hidden neurons in capturing and processing linguistic features has become increasingly important. This study investigates Hidden Neuron Activation Analysis on labeled text data to reveal how individual neurons contribute to a model’s decision-making process. We propose a model-agnostic explainability framework for text classifiers that identifies concepts activating specific neurons involved in classification. 
An LSTM-based network is trained on the AG News topic classification dataset, comprising four distinct classes, and the final Dense layer with 64 neurons is analyzed. In addition, statistical analyses such as the Mann-Whitney U Test are conducted to assess the robustness and reliability of the system. The statistical analysis shows that concepts play an important role in the decision-making process of the neural network. Our findings enhance interpretability in NLP models and offer a foundation for optimizing neural architectures in text classification tasks.&lt;/p&gt;
</style></abstract></record></records></xml>