<?xml version="1.0" encoding="UTF-8"?><xml><records><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Avishek Das</style></author><author><style face="normal" font="default" size="100%">Abhilekha Dalal</style></author><author><style face="normal" font="default" size="100%">Pascal Hitzler</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Hidden Neuron Activation Analysis on Labeled Text Data</style></title><secondary-title><style face="normal" font="default" size="100%">K-CAP '25: Knowledge Capture Conference 2025</style></secondary-title></titles><keywords><keyword><style face="normal" font="default" size="100%">Concept-based Explanation</style></keyword><keyword><style face="normal" font="default" size="100%">Dense Layer Analysis</style></keyword><keyword><style face="normal" font="default" size="100%">Explainable AI</style></keyword><keyword><style face="normal" font="default" size="100%">Hidden Neuron Analysis</style></keyword></keywords><dates><year><style face="normal" font="default" size="100%">2025</style></year><pub-dates><date><style face="normal" font="default" size="100%">12/2025</style></date></pub-dates></dates><publisher><style face="normal" font="default" size="100%">ACM</style></publisher><pub-location><style face="normal" font="default" size="100%">USA</style></pub-location><pages><style face="normal" font="default" size="100%">206-210</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;Understanding the internal mechanisms of deep neural networks remains a central challenge in the field of Explainable Artificial Intelligence (XAI). With the rapid advancement of neural architectures in natural language processing (NLP), analyzing the role of hidden neurons in capturing and processing linguistic features has become increasingly important. This study investigates Hidden Neuron Activation Analysis on labeled text data to reveal how individual neurons contribute to a model’s decision-making process. We propose a model-agnostic explainability framework for text classifiers that identifies concepts activating specific neurons involved in classification. An LSTM-based network is trained on the AG News topic classification dataset, comprising four distinct classes, and the final Dense layer with 64 neurons is analyzed. In addition, a statistical analysis using the Mann-Whitney U test is conducted to assess the robustness and reliability of the system. The statistical analysis shows that concepts play an important role in the decision-making process of the neural network. Our findings enhance interpretability in NLP models and offer a foundation for optimizing neural architectures in text classification tasks.&lt;/p&gt;
</style></abstract></record></records></xml>