%0 Thesis
%B Mathematics and Statistical Science
%D 2017
%T Propositional Rule Extraction from Neural Networks under Background Knowledge
%A Maryam Labaf
%A Pascal Hitzler
%A Anthony B. Evans
%X

It is well known that the input-output behaviour of a neural network can be recast as a set of propositional rules, and under certain weak preconditions this is also always possible with positive (or definite) rules. Furthermore, in this case there is in fact a unique minimal (technically, reduced) set of such rules which perfectly captures the input-output mapping. In this paper, we investigate to what extent these results and the corresponding rule extraction algorithms can be lifted to take additional background knowledge into account. It turns out that uniqueness of the solution can then no longer be guaranteed. However, the background knowledge often makes it possible to extract simpler, and thus more easily understandable, rule sets which still perfectly capture the input-output mapping.

%V Master
%P 50
%8 07/2017
%G eng
%9 Master thesis

%0 Conference Paper
%B Twelfth International Workshop on Neural-Symbolic Learning and Reasoning
%D 2017
%T Propositional rule extraction from neural networks under background knowledge
%A Maryam Labaf
%E Pascal Hitzler
%Y Anthony B. Evans
%K Background knowledge
%K Neural Network
%K Propositional Logic
%K Rule Extraction
%8 07/2017
%G eng
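
The abstract describes extracting a reduced set of positive propositional rules from a network's input-output behaviour and then simplifying rule bodies under background knowledge. The following Python sketch illustrates that general idea on a toy boolean mapping; it is not the algorithm from the thesis, and all names (`extract_rules`, `simplify_with_background`, the toy mapping `net`) are illustrative assumptions.

```python
# Illustrative sketch only: extracting positive (definite) rules from a toy
# monotone input-output map and simplifying rule bodies with background rules.
# Names and the example mapping are assumptions, not taken from the thesis.
from itertools import chain, combinations

def powerset(atoms):
    atoms = list(atoms)
    return (frozenset(s) for s in
            chain.from_iterable(combinations(atoms, r)
                                for r in range(len(atoms) + 1)))

def extract_rules(net, in_atoms, out_atoms):
    """For each output atom h, keep the subset-minimal input sets S with
    h in net(S); read each pair as a positive rule  S -> h.  For a monotone
    mapping these minimal supports form a reduced rule set."""
    rules = []
    for h in out_atoms:
        supports = [S for S in powerset(in_atoms) if h in net(S)]
        minimal = [S for S in supports if not any(T < S for T in supports)]
        rules.extend((S, h) for S in minimal)
    return rules

def simplify_with_background(rules, background):
    """Drop a body atom whenever the rest of the body already entails it
    under the background rules (simple forward chaining on definite rules)."""
    def closure(facts):
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for body, head in background:
                if body <= facts and head not in facts:
                    facts.add(head)
                    changed = True
        return facts

    simplified = []
    for body, head in rules:
        for atom in sorted(body):
            rest = body - {atom}
            if atom in closure(rest):
                body = rest
        simplified.append((body, head))
    return simplified

if __name__ == "__main__":
    # Toy "network": output o fires exactly when inputs a and b are both on.
    net = lambda S: {"o"} if {"a", "b"} <= S else set()
    rules = extract_rules(net, ["a", "b", "c"], ["o"])
    print(rules)  # one rule with body {a, b} and head o
    # Background knowledge a -> b lets the body shrink to {a}.
    print(simplify_with_background(rules, [(frozenset({"a"}), "b")]))
```

In this toy setting the extraction yields the single rule a ∧ b → o, and the background rule a → b makes the shorter rule a → o sufficient, mirroring the abstract's point that background knowledge can yield simpler rule sets that still capture the same input-output mapping.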