TY - CHAP
T1 - Logics for the Semantic Web
T2 - Handbook of the History of Logic
Y1 - 2014
A1 - Pascal Hitzler
A1 - Jens Lehmann
A1 - Axel Polleres
ED - Dov M. Gabbay
ED - John Woods
ED - Jörg Siekmann
JF - Handbook of the History of Logic
PB - Elsevier
VL - 9
ER -
TY - JOUR
T1 - Concept learning in description logics using refinement operators
JF - Machine Learning
Y1 - 2010
A1 - Jens Lehmann
A1 - Pascal Hitzler
KW - description logics
KW - inductive logic programming
KW - OWL
KW - refinement operators
KW - Semantic Web
KW - structured machine learning
AB - With the advent of the Semantic Web, description logics have become one of the most prominent paradigms for knowledge representation and reasoning. Progress in research and applications, however, is constrained by the lack of well-structured knowledge bases consisting of a sophisticated schema and instance data adhering to this schema. It is paramount that suitable automated methods for their acquisition, maintenance, and evolution be developed. In this paper, we provide a learning algorithm based on refinement operators for the description logic ALCQ, including support for concrete roles. We develop the algorithm from thorough theoretical foundations by identifying the possible abstract property combinations that refinement operators for description logics can have. Using these investigations as a basis, we derive a practically useful complete and proper refinement operator. The operator is then cast into a learning algorithm and evaluated using our implementation DL-Learner. The results of the evaluation show that our approach is superior to other learning approaches on description logics and is competitive with established ILP systems.
VL - 78
UR - http://springerlink.metapress.com/content/c040n45u15qrnu44/
ER -
TY - JOUR
T1 - Extracting Reduced Logic Programs from Artificial Neural Networks
JF - Applied Intelligence
Y1 - 2010
A1 - Jens Lehmann
A1 - Sebastian Bader
A1 - Pascal Hitzler
AB - Artificial neural networks can be trained to perform excellently in many application areas. While they can learn from raw data to solve sophisticated recognition and analysis problems, the acquired knowledge remains hidden within the network architecture and is not readily accessible for analysis or further use: trained networks are black boxes. Recent research efforts therefore investigate the possibility of extracting symbolic knowledge from trained networks, in order to analyze, validate, and reuse the structural insights gained implicitly during the training process. In this paper, we study how knowledge in the form of propositional logic programs can be obtained in such a way that the programs are as simple as possible, where simple is understood in a clearly defined and meaningful way.
VL - 32
UR - http://dx.doi.org/10.1007/s10489-008-0142-y
ER -