<?xml version="1.0" encoding="UTF-8"?><xml><records><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Rui Zhu</style></author><author><style face="normal" font="default" size="100%">Cogan Shimizu</style></author><author><style face="normal" font="default" size="100%">Shirly Stephen</style></author><author><style face="normal" font="default" size="100%">Colby K. Fisher</style></author><author><style face="normal" font="default" size="100%">Thomas Thelen</style></author><author><style face="normal" font="default" size="100%">Kitty Currier</style></author><author><style face="normal" font="default" size="100%">Krzysztof Janowicz</style></author><author><style face="normal" font="default" size="100%">Pascal Hitzler</style></author><author><style face="normal" font="default" size="100%">Mark Schildhauer</style></author><author><style face="normal" font="default" size="100%">Wenwen Li</style></author><author><style face="normal" font="default" size="100%">Dean Rehberger</style></author><author><style face="normal" font="default" size="100%">Adrita Barua</style></author><author><style face="normal" font="default" size="100%">Antrea Christou</style></author><author><style face="normal" font="default" size="100%">Ling Cai</style></author><author><style face="normal" font="default" size="100%">Abhilekha Dalal</style></author><author><style face="normal" font="default" size="100%">Anthony D'Onofrio</style></author><author><style face="normal" font="default" size="100%">Andrew Eells</style></author><author><style face="normal" font="default" size="100%">Mitchell Faulk</style></author><author><style face="normal" font="default" size="100%">Zilong Liu</style></author><author><style face="normal" font="default" size="100%">Gengchen Mai</style></author><author><style face="normal" font="default" size="100%">Mohammad Saeid 
Mahdavinejad</style></author><author><style face="normal" font="default" size="100%">Bryce D. Mecum</style></author><author><style face="normal" font="default" size="100%">Sanaz Saki Norouzi</style></author><author><style face="normal" font="default" size="100%">Meilin Shi</style></author><author><style face="normal" font="default" size="100%">Yuanyuan Tian</style></author><author><style face="normal" font="default" size="100%">Sizhe Wang</style></author><author><style face="normal" font="default" size="100%">Zhangyu Wang</style></author><author><style face="normal" font="default" size="100%">Joseph Zalewski</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">The KnowWhereGraph: A Large-Scale Geo-Knowledge Graph for Interdisciplinary Knowledge Discovery and Geo-Enrichment</style></title><secondary-title><style face="normal" font="default" size="100%">Transactions in GIS </style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2026</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://doi.org/10.48550/arXiv.2502.13874</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">30</style></volume><language><style face="normal" font="default" size="100%">eng</style></language></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>5</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Abhilekha Dalal</style></author><author><style face="normal" font="default" size="100%">Moumita Sen Sarma</style></author><author><style face="normal" font="default" size="100%">Avishek Das</style></author><author><style face="normal" font="default" size="100%">Samatha E. Akkamahadevi</style></author><author><style face="normal" font="default" size="100%">Eugene Y. 
Vasserman</style></author><author><style face="normal" font="default" size="100%">Pascal Hitzler</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Neurosymbolic Hidden Neuron Analysis in Convolutional Neural Networks</style></title><secondary-title><style face="normal" font="default" size="100%">Neuro-Symbolic AI: Bridging the Gap Between Neural Networks and Symbolic Reasoning</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2026</style></year></dates><publisher><style face="normal" font="default" size="100%">Elsevier</style></publisher><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p class=&quot;rtejustify&quot;&gt;This tutorial introduces a step-by-step, deductive pipeline for making the inner workings of neural networks more transparent by assigning human-understandable concepts to hidden neuron activations. The approach automatically maps neuron behavior to symbolic concepts drawn from structured knowledge sources and attaches an error margin to each label, providing a measure of confidence in its precision. While demonstrated in detail on the ADE20k scene dataset---including single-concept neurons, multiple neurons contributing to the same concept, and multi-concept neurons---the method is also applied to the SUN2012 dataset and adapted for a text classification task, highlighting its generalizability across modalities. The chapter is designed to be practical and educational, focusing on a replicable methodology that readers can adapt to varied applications. Through worked examples, visualizations, and evaluation strategies, the tutorial offers a clear, reusable framework for concept-based neuron analysis in both vision and language models.&lt;/p&gt;
</style></abstract><section><style face="normal" font="default" size="100%">8</style></section></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Abhilekha Dalal</style></author><author><style face="normal" font="default" size="100%">Rushrukh Rayan</style></author><author><style face="normal" font="default" size="100%">Adrita Barua</style></author><author><style face="normal" font="default" size="100%">Samatha Ereshi Akkamahadevi</style></author><author><style face="normal" font="default" size="100%">Avishek Das</style></author><author><style face="normal" font="default" size="100%">Cara Widmer</style></author><author><style face="normal" font="default" size="100%">Eugene Y Vasserman</style></author><author><style face="normal" font="default" size="100%">Kamruzzaman Sarker</style></author><author><style face="normal" font="default" size="100%">Pascal Hitzler</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Toward a Neurosymbolic Understanding of Hidden Neuron Activations</style></title><secondary-title><style face="normal" font="default" size="100%">Neurosymbolic Artificial Intelligence</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2026</style></year></dates><volume><style face="normal" font="default" size="100%">2</style></volume><language><style face="normal" font="default" size="100%">eng</style></language></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Avishek Das</style></author><author><style face="normal" font="default" size="100%">Abhilekha Dalal</style></author><author><style face="normal" font="default" size="100%">Pascal 
Hitzler</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Hidden Neuron Activation Analysis on Labeled Text Data</style></title><secondary-title><style face="normal" font="default" size="100%">K-CAP '25: Knowledge Capture Conference 2025</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">Concept-based Explanation</style></keyword><keyword><style  face="normal" font="default" size="100%">Dense Layer Analysis</style></keyword><keyword><style  face="normal" font="default" size="100%">Explainable AI</style></keyword><keyword><style  face="normal" font="default" size="100%">Hidden Neuron Analysis</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2025</style></year><pub-dates><date><style  face="normal" font="default" size="100%">12/2025</style></date></pub-dates></dates><publisher><style face="normal" font="default" size="100%">ACM</style></publisher><pub-location><style face="normal" font="default" size="100%">USA</style></pub-location><pages><style face="normal" font="default" size="100%">206 - 210</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;Understanding the internal mechanisms of deep neural networks remains a central challenge in the field of Explainable Artificial Intelligence (XAI). With the rapid advancement of neural architectures in natural language processing (NLP), analyzing the role of hidden neurons in capturing and processing linguistic features has become increasingly important. This study investigates Hidden Neuron Activation Analysis on labeled text data to reveal how individual neurons contribute to a model’s decision-making process. We propose a model-agnostic explainability framework for text classifiers that identifies concepts activating specific neurons involved in classification. 
An LSTM-based network is trained on the AG News topic classification dataset, comprising four distinct classes, and the final Dense layer with 64 neurons is analyzed. In addition, statistical analyses such as the Mann-Whitney U Test are conducted to assess the robustness and reliability of the system. The statistical analysis shows that concepts play an important role in the decision-making process of the neural network. Our findings enhance interpretability in NLP models and offer a foundation for optimizing neural architectures in text classification tasks.&lt;/p&gt;
</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Samatha Ereshi Akkamahadevi</style></author><author><style face="normal" font="default" size="100%">Abhilekha Dalal</style></author><author><style face="normal" font="default" size="100%">Pascal Hitzler</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Automating CNN Neuron Interpretation using Concept Induction</style></title><secondary-title><style face="normal" font="default" size="100%">THE 23RD INTERNATIONAL SEMANTIC WEB CONFERENCE, ISWC 2024</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">Automation in AI</style></keyword><keyword><style  face="normal" font="default" size="100%">Deep Learning</style></keyword><keyword><style  face="normal" font="default" size="100%">Explainable Artificial Intelligence</style></keyword><keyword><style  face="normal" font="default" size="100%">Knowledge Graph</style></keyword><keyword><style  face="normal" font="default" size="100%">Semantic Web</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2024</style></year></dates><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;This paper presents an automation pipeline for interpreting hidden neuron activations in Convolutional Neural Networks (CNNs), a crucial objective of Explainable AI (XAI). Previously, our research group addressed this objective by employing concept induction and semantic reasoning using a concept hierarchy derived from the Wikipedia knowledge graph. However, the process was executed manually, taking several days to complete. 
In this study, we have fully automated the workflow, achieving consistent results while significantly reducing the execution time. The automation pipeline streamlines model training, data preparation, concept induction, image retrieval, classification, and statistical validation, thereby completely eliminating manual intervention. This automation enables us to efficiently interpret and validate CNN neuron activations by modifying parameters, such as incorporating a broader range of training images and classes and examining additional concept induction results across various neuron layers using different analytical tools.&lt;/p&gt;
</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>10</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Abhilekha Dalal</style></author><author><style face="normal" font="default" size="100%">Rushrukh Rayan</style></author><author><style face="normal" font="default" size="100%">Pascal Hitzler</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Error-margin Analysis for Hidden Neuron Activation Labels</style></title><secondary-title><style face="normal" font="default" size="100%">18th International Conference on Neural-Symbolic Learning and Reasoning, NeSy 2024</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">CNN</style></keyword><keyword><style  face="normal" font="default" size="100%">Concept Induction</style></keyword><keyword><style  face="normal" font="default" size="100%">Explainable AI</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2024</style></year></dates><publisher><style face="normal" font="default" size="100%">Springer </style></publisher><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;Understanding how high-level concepts are represented within artificial neural networks is a fundamental challenge in the field of artificial intelligence. While existing literature in explainable AI emphasizes the importance of labeling neurons with concepts to understand their functioning, most works focus on identifying what stimulus activates a neuron in most cases; this corresponds to the notion of recall in information retrieval. We argue that this is only the first part of a two-part job; it is imperative to also investigate neuron responses to other stimuli, i.e., their precision.
We call this the neuron label’s error margin.&lt;/p&gt;
</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>32</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Abhilekha Dalal</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Understanding Hidden Neuron Activations Using Structured Background Knowledge and Deductive Reasoning</style></title><secondary-title><style face="normal" font="default" size="100%">Department of Computer Science</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2024</style></year></dates><volume><style face="normal" font="default" size="100%">PhD</style></volume><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;A central challenge in Explainable AI (XAI) is accurately interpreting hidden neuron activations in deep neural networks (DNNs). Accurate interpretations help demystify the black-box nature of deep learning models by explaining what the system internally detects as relevant in the input. While some existing methods show that hidden neuron activations can be human-interpretable, systematic and automated approaches leveraging background knowledge remain underexplored. This thesis introduces a novel model-agnostic post-hoc XAI method that integrates a Wikipedia-derived concept hierarchy of approximately 2 million classes as background knowledge and employs OWL-reasoning-based Concept Induction to generate explanations. Our approach automatically assigns meaningful class expressions to neurons in the dense layers of Convolutional Neural Networks, outperforming prior methods both quantitatively and qualitatively.&lt;/p&gt;

&lt;p&gt;In addition, we argue that understanding neuron behavior requires not only identifying what activates a neuron (recall) but also examining its precision—how it responds to other stimuli, which we define as the neuron's error margin, enhancing the granularity of neuron interpretation.&lt;/p&gt;

&lt;p&gt;To visualize these findings, we present ConceptLens, an innovative tool that visualizes neuron activations and error margins. ConceptLens offers insights into the confidence levels of neuron activations and enables an intuitive understanding of neuron behavior through visual bar charts. Together, these contributions offer a holistic approach to interpreting DNNs, advancing the explainability and transparency of AI models.&lt;/p&gt;
</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Abhilekha Dalal</style></author><author><style face="normal" font="default" size="100%">Rushrukh Rayan</style></author><author><style face="normal" font="default" size="100%">Adrita Barua</style></author><author><style face="normal" font="default" size="100%">Eugene Y. Vasserman</style></author><author><style face="normal" font="default" size="100%">Kamruzzaman Sarker</style></author><author><style face="normal" font="default" size="100%">Pascal Hitzler</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">On the Value of Labeled Data and Symbolic Methods for Hidden Neuron Activation Analysis</style></title><secondary-title><style face="normal" font="default" size="100%">18th International Conference on Neural-Symbolic Learning and Reasoning, NeSy 2024</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">CNN</style></keyword><keyword><style  face="normal" font="default" size="100%">Concept Induction</style></keyword><keyword><style  face="normal" font="default" size="100%">Explainable AI</style></keyword><keyword><style  face="normal" font="default" size="100%">LLM</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2024</style></year></dates><publisher><style face="normal" font="default" size="100%">Springer </style></publisher><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;We introduce a novel model-agnostic post-hoc Explainable AI method that provides meaningful interpretations for hidden neuron activations in a Convolutional Neural Network. Our approach uses a Wikipedia-derived concept hierarchy with approx. 
2 million classes as background knowledge, and deductive-reasoning-based Concept Induction for explanation generation. Additionally, we explore and compare the capabilities of off-the-shelf pre-trained multimodal-based explainable methods. Our evaluation shows that our neurosymbolic method holds a competitive edge in both quantitative and qualitative aspects.&lt;/p&gt;
</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>27</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Shirly Stephen</style></author><author><style face="normal" font="default" size="100%">Cogan Shimizu</style></author><author><style face="normal" font="default" size="100%">Pascal Hitzler</style></author><author><style face="normal" font="default" size="100%">Karthik Soman</style></author><author><style face="normal" font="default" size="100%">Peter W Rose</style></author><author><style face="normal" font="default" size="100%">John H Morris</style></author><author><style face="normal" font="default" size="100%">Sergio E Baranzini</style></author><author><style face="normal" font="default" size="100%">Krzysztof Janowicz</style></author><author><style face="normal" font="default" size="100%">Antrea Christou</style></author><author><style face="normal" font="default" size="100%">Abhilekha Dalal</style></author><author><style face="normal" font="default" size="100%">Kitty Currier</style></author><author><style face="normal" font="default" size="100%">Mark Schildhauer</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Bridging RDF and Property Graphs: Linking KnowWhereGraph and SPOKE</style></title></titles><dates><year><style  face="normal" font="default" size="100%">2023</style></year></dates><language><style face="normal" font="default" size="100%">eng</style></language></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>27</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Abhilekha Dalal</style></author><author><style face="normal" font="default" size="100%">Md Kamruzzaman Sarker</style></author><author><style face="normal" font="default" size="100%">Adrita Barua</style></author><author><style face="normal" font="default" 
size="100%">Pascal Hitzler</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Explaining Deep Learning Hidden Neuron Activations using Concept Induction</style></title></titles><dates><year><style  face="normal" font="default" size="100%">2023</style></year></dates><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;One of the current key challenges in Explainable AI is correctly interpreting activations of hidden neurons. It seems evident that accurate interpretations thereof would provide insights into the question of what a deep learning system has internally detected as relevant in the input, thus lifting some of the black-box character of deep learning systems.&lt;/p&gt;

&lt;p&gt;The state of the art on this front indicates that hidden node activations appear to be interpretable in a way that makes sense to humans, at least in some cases. Yet, systematic automated methods that would first hypothesize an interpretation of hidden neuron activations, and then verify it, are mostly missing.&lt;/p&gt;

&lt;p&gt;In this paper, we provide such a method and demonstrate that it provides meaningful interpretations. It is based on using large-scale background knowledge -- a class hierarchy of approx. 2 million classes curated from the Wikipedia Concept Hierarchy -- together with a symbolic reasoning approach called concept induction based on description logics that was originally developed for applications in the Semantic Web field.&lt;/p&gt;

&lt;p&gt;Our results show that we can automatically attach meaningful labels from the background knowledge to individual neurons in the dense layer of a Convolutional Neural Network through a hypothesis and verification process.&lt;/p&gt;
</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>27</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Cogan Shimizu</style></author><author><style face="normal" font="default" size="100%">Shirly Stephen</style></author><author><style face="normal" font="default" size="100%">Kitty Currier</style></author><author><style face="normal" font="default" size="100%">Pascal Hitzler</style></author><author><style face="normal" font="default" size="100%">Rui Zhu</style></author><author><style face="normal" font="default" size="100%">Krzysztof Janowicz</style></author><author><style face="normal" font="default" size="100%">Mark Schildhauer</style></author><author><style face="normal" font="default" size="100%">Mohammad Saeid Mahdavinejad</style></author><author><style face="normal" font="default" size="100%">Abhilekha Dalal</style></author><author><style face="normal" font="default" size="100%">Adrita Barua</style></author><author><style face="normal" font="default" size="100%">Ling Cai</style></author><author><style face="normal" font="default" size="100%">Gengchen Mai</style></author><author><style face="normal" font="default" size="100%">Zhangyu Wang</style></author><author><style face="normal" font="default" size="100%">Yuanyuan Tian</style></author><author><style face="normal" font="default" size="100%">Sanaz Saki Norouzi</style></author><author><style face="normal" font="default" size="100%">Zilong Liu</style></author><author><style face="normal" font="default" size="100%">Meilin Shi</style></author><author><style face="normal" font="default" size="100%">Colby K. 
Fisher</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">The KnowWhereGraph Ontology</style></title></titles><dates><year><style  face="normal" font="default" size="100%">2023</style></year></dates><language><style face="normal" font="default" size="100%">eng</style></language></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Cogan Shimizu</style></author><author><style face="normal" font="default" size="100%">Shirly Stephen</style></author><author><style face="normal" font="default" size="100%">Rui Zhu</style></author><author><style face="normal" font="default" size="100%">Kitty Currier</style></author><author><style face="normal" font="default" size="100%">Mark Schildhauer</style></author><author><style face="normal" font="default" size="100%">Dean Rehberger</style></author><author><style face="normal" font="default" size="100%">Pascal Hitzler</style></author><author><style face="normal" font="default" size="100%">Krzysztof Janowicz</style></author><author><style face="normal" font="default" size="100%">Colby K. 
Fisher</style></author><author><style face="normal" font="default" size="100%">Mohammad Saeid Mahdavinejad</style></author><author><style face="normal" font="default" size="100%">Antrea Christou</style></author><author><style face="normal" font="default" size="100%">Adrita Barua</style></author><author><style face="normal" font="default" size="100%">Abhilekha Dalal</style></author><author><style face="normal" font="default" size="100%">Sanaz Saki Norouzi</style></author><author><style face="normal" font="default" size="100%">Zilong Liu</style></author><author><style face="normal" font="default" size="100%">Meilin Shi</style></author><author><style face="normal" font="default" size="100%">Ling Cai</style></author><author><style face="normal" font="default" size="100%">Gengchen Mai</style></author><author><style face="normal" font="default" size="100%">Zhangyu Wang</style></author><author><style face="normal" font="default" size="100%">Yuanyuan Tian</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">The KnowWhereGraph Ontology: A Showcase</style></title></titles><dates><year><style  face="normal" font="default" size="100%">2023</style></year></dates><language><style face="normal" font="default" size="100%">eng</style></language></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Cogan Shimizu</style></author><author><style face="normal" font="default" size="100%">Shirly Stephen</style></author><author><style face="normal" font="default" size="100%">Antrea Christou</style></author><author><style face="normal" font="default" size="100%">Kitty Currier</style></author><author><style face="normal" font="default" size="100%">Mohammad Saeid Mahdavinejad</style></author><author><style face="normal" font="default" size="100%">Sanaz Saki Norouzi</style></author><author><style face="normal" 
font="default" size="100%">Abhilekha Dalal</style></author><author><style face="normal" font="default" size="100%">Adrita Barua</style></author><author><style face="normal" font="default" size="100%">Colby K. Fisher</style></author><author><style face="normal" font="default" size="100%">Anthony D’Onofrio</style></author><author><style face="normal" font="default" size="100%">Thomas Thelen</style></author><author><style face="normal" font="default" size="100%">Krzysztof Janowicz</style></author><author><style face="normal" font="default" size="100%">Dean Rehberger</style></author><author><style face="normal" font="default" size="100%">Mark Schildhauer</style></author><author><style face="normal" font="default" size="100%">Pascal Hitzler</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">KnowWhereGraph-Lite: A Perspective of the KnowWhereGraph</style></title><secondary-title><style face="normal" font="default" size="100%">KGSWC 2023</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2023</style></year></dates><language><style face="normal" font="default" size="100%">eng</style></language></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>27</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Pascal Hitzler</style></author><author><style face="normal" font="default" size="100%">Krzysztof Janowicz</style></author><author><style face="normal" font="default" size="100%">Cogan Shimizu</style></author><author><style face="normal" font="default" size="100%">Abhilekha Dalal</style></author><author><style face="normal" font="default" size="100%">Aaron Eberhart</style></author><author><style face="normal" font="default" size="100%">Andrew Eells</style></author><author><style face="normal" font="default" size="100%">Sanaz Saki Norouzi</style></author></authors></contributors><titles><title><style 
face="normal" font="default" size="100%">Openness and Transparency in Academic Publishing: A Decade of Data from the Semantic Web Journal</style></title></titles><dates><year><style  face="normal" font="default" size="100%">2023</style></year></dates><language><style face="normal" font="default" size="100%">eng</style></language></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Abhilekha Dalal</style></author><author><style face="normal" font="default" size="100%">Cogan Shimizu</style></author><author><style face="normal" font="default" size="100%">Pascal Hitzler</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Bridging Upper Ontology and Modular Ontology Modeling: A Tool and Evaluation</style></title><secondary-title><style face="normal" font="default" size="100%">KGSWC-2021</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2021</style></year></dates><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;Ontologies are increasingly used as schema for knowledge graphs in many application areas. As such, there are a variety of different approaches for their development. In this paper, we describe and evaluate UAO (the Upper Ontology Alignment Tool), which is an extension to CoModIDE, a graphical Protégé plugin for modular ontology modeling. UAO enables ontology engineers to combine modular ontology modeling with a more traditional ontology modeling approach based on upper ontologies. We posit -- and our evaluation supports this claim -- that the tool does indeed make it easier to combine both approaches. Thus, UAO enables a best-of-both-worlds approach.
The evaluation consists of a user study, and the results show that performing typical manual alignment modeling tasks is easier with UAO than with Protégé alone, both in terms of the time required to complete the task and the correctness of the output. Additionally, our test subjects gave UAO significantly higher ratings on the System Usability Scale.&lt;/p&gt;
</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Abhilekha Dalal</style></author><author><style face="normal" font="default" size="100%">Cogan Shimizu</style></author><author><style face="normal" font="default" size="100%">Pascal Hitzler</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Modular Ontology Modeling Meets Upper Ontologies: The Upper Ontology Alignment Tool</style></title><secondary-title><style face="normal" font="default" size="100%">The 19th International Semantic Web Conference</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2020</style></year><pub-dates><date><style  face="normal" font="default" size="100%">10/2020</style></date></pub-dates></dates><volume><style face="normal" font="default" size="100%">2721</style></volume><pages><style face="normal" font="default" size="100%">119-124</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;We provide an extension to the Protégé-based modular ontology engineering tool CoModIDE, in order to make it possible for ontology engineers to adhere to traditional ontology modeling processes based on upper or foundational ontologies. As a bridge between the more recently proposed modular ontology modeling approach and more classical ones based on foundational ontologies, it enables a best-of-both-worlds approach for ontology engineering.&lt;/p&gt;
</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>32</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Abhilekha Dalal</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Modular Ontology Modeling Meets Upper Ontologies: The Upper Ontology Alignment Tool</style></title><secondary-title><style face="normal" font="default" size="100%">Department of Computer Science</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2020</style></year><pub-dates><date><style  face="normal" font="default" size="100%">11/2020</style></date></pub-dates></dates><publisher><style face="normal" font="default" size="100%">Kansas State University</style></publisher><pub-location><style face="normal" font="default" size="100%">Manhattan</style></pub-location><volume><style face="normal" font="default" size="100%">Masters</style></volume><pages><style face="normal" font="default" size="100%">35</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;Ontology modeling has become a primary approach to schema generation for data integration and knowledge graphs in many application areas. The quest for efficient ways to model useful and reusable ontologies has led to different ontology creation proposals over the years. The project focuses on two major approaches: modeling based on a top-level ontology, and modular ontology modeling.&lt;/p&gt;

&lt;p&gt;The traditional approach is based on a top-level ontology: the strategy is to use an ontology comprehensive enough to cover a broad spectrum of domains through universal terminology. In this way, all domain ontologies share a common top-level formal ontology in which their respective root nodes can be defined, and consistency is thus assured across the knowledge graph. The more recent approach is quite different: it is a refinement of the eXtreme Design methodology based on ontology design patterns. The whole ontology is viewed as a collection of interconnected modules, each developed around a fundamental notion identified from experts' terminology or the use case. Because the modules are developed in a divide-and-conquer fashion, they can be shared and reused by other ontologies as needed, which in turn justifies calling the ontology FAIR (findable, accessible, interoperable, and reusable).&lt;/p&gt;

&lt;p&gt;Although it has been argued that there are advantages to either paradigm, the two approaches can be combined, depending on the use case or the preferences of the ontology engineers. We provide an extension to the Protégé-based modular ontology engineering tool CoModIDE that makes it possible for ontology engineers to follow a traditional, ad hoc ontology modeling approach alongside more modern paradigms such as modular ontology engineering. The project is aimed at domain-level ontology developers and organizations engaged in ontology development, whom the plugin may help by closing the tooling gap between the paradigms so that they can build robust, flexible ontologies suited to their needs. As a bridge between the more recently proposed modular ontology modeling approach and more classical ones based on foundational ontologies, it enables a best-of-both-worlds approach to ontology engineering.&lt;/p&gt;
</style></abstract></record></records></xml>