It is well known that the input-output behaviour of a neural network can be recast as a set of propositional rules, and under certain weak preconditions this is also always possible with positive (or definite) rules. Furthermore, in this case there is in fact a unique minimal (technically, reduced) set of such rules which perfectly captures the input-output mapping. In this paper, we investigate to what extent these results and the corresponding rule extraction algorithms can be lifted to take additional background knowledge into account. It turns out that uniqueness of the solution can then no longer be guaranteed. However, the background knowledge often makes it possible to extract simpler, and thus more easily understandable, rulesets which still perfectly capture the input-output mapping.
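The abstract's core claim can be illustrated with a toy sketch: for a network over binary inputs, one can enumerate all input vectors, emit one positive rule per activating input, and then remove subsumed rules to obtain a reduced set. The network, weights, and function names below are purely illustrative assumptions, not the paper's actual algorithm.

```python
from itertools import product

def network(x1, x2):
    # Illustrative single threshold unit: fires when the weighted
    # sum of its binary inputs exceeds 0.5 (weights are made up).
    return 1 if 0.6 * x1 + 0.6 * x2 > 0.5 else 0

def extract_positive_rules(net, n_inputs):
    """Enumerate all Boolean inputs and emit one positive (definite)
    rule per activating input vector: the rule's body is the
    conjunction of the input atoms set to true."""
    rules = []
    for bits in product([0, 1], repeat=n_inputs):
        if net(*bits) == 1:
            rules.append([f"x{i + 1}" for i, b in enumerate(bits) if b])
    return rules

def reduce_rules(rules):
    """Drop any rule whose body is a proper superset of another
    rule's body (it is subsumed and hence redundant)."""
    bodies = [set(r) for r in rules]
    return [r for r in bodies if not any(s < r for s in bodies)]

rules = extract_positive_rules(network, 2)
reduced = reduce_rules(rules)
```

Here the raw extraction yields three rules (bodies {x1}, {x2}, and {x1, x2}), and reduction removes the subsumed {x1, x2}, leaving the unique reduced ruleset the abstract refers to: the network behaves as x1 → out and x2 → out, i.e. a Boolean OR.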

Labaf, Maryam; Hitzler, Pascal; Evans, Anthony B. (07/2017). "Propositional rule extraction from neural networks under background knowledge."
Keywords: Background Knowledge, Neural Network, Propositional Logic, Rule Extraction.
https://daselab.cs.ksu.edu/publications/propositional-rule-extraction-neural-networks-under-background-knowledge