OWL Modeling through rules: The ROWLTab Protégé plug-in evaluated

An OWL ontology can be built by writing axioms directly, or by writing rules that are then translated into axioms. But which approach is quicker? Which one is less error-prone?

We conducted an experiment to find out whether standard Protégé or the ROWLTab plugin is better for ontology modeling in terms of speed and errors. The results are reported in: Md. Kamruzzaman Sarker, Adila A. Krisnadhi, David Carral and Pascal Hitzler, Rule-based OWL Modeling with ROWLTab Protégé Plugin, to appear in Proceedings of ESWC 2017. An excerpt can be found below.

We used standard Protégé and the ROWLTab plugin for Protégé to model the ontology in two different ways: standard Protégé was used to write axioms manually, while the ROWLTab plugin was used to write rules.
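For illustration, consider the statement "Every father is a parent" (question 1 below). In standard Protégé it would be entered as an OWL subclass axiom, while in ROWLTab it would be entered as a rule; the class names here are just illustrative:

```
OWL axiom (Manchester syntax):   Father SubClassOf Parent
ROWLTab rule (SWRL syntax):      Father(?x) -> Parent(?x)
```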

We used a total of 12 questions to evaluate performance. The 12 questions were divided into 3 groups according to difficulty level.

Difficulty Level:

1. Easy

2. Medium

3. Hard

From each group, participants were asked to model 2 questions using standard Protégé and the other 2 questions using the ROWLTab plugin interface.

So in total, 6 questions (2 easy + 2 medium + 2 hard) were modeled using standard Protégé, and the other 6 questions (2 easy + 2 medium + 2 hard) were modeled using the ROWLTab plugin interface.

 

Questions

Easy

1. Every father is a parent.

2. Every parent is a Human.

3. Every university is an educational institution.

4. Every educational institution is an organization.

 

Medium

5. If a person has a mother then that mother is a parent.

6. If a person has a parent who is female, then this parent is a mother.

7. Any educational institution that awards a medical degree is a medical school.

8. Any university that is funded by a state government is a public university.
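As a sketch of why the medium questions are harder to axiomatize directly, question 5 ("If a person has a mother then that mother is a parent") is straightforward as a rule but requires an inverse-property restriction as an axiom (class and property names are illustrative):

```
ROWLTab rule (SWRL):       Person(?x) ^ hasMother(?x, ?y) -> Parent(?y)
OWL axiom (Manchester):    inverse hasMother some Person SubClassOf Parent
```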

 

Hard

9. If a person's brother has a son, then that son is the first person's nephew.

10. If a person has a female child, then that person would have that female child as her daughter.

11. All forests are more biodiverse than any desert.

12. All teenagers are younger than all twens.
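The hard questions typically involve three variables. Question 9, for example, is naturally a three-variable rule; its main part corresponds to an OWL property chain, but fully capturing the Person(?x) condition in plain OWL requires an additional encoding trick (rolification). A sketch, with illustrative names:

```
ROWLTab rule (SWRL):
  Person(?x) ^ hasBrother(?x, ?y) ^ hasSon(?y, ?z) -> hasNephew(?x, ?z)

OWL axiom (Manchester, ignoring the Person condition):
  hasBrother o hasSon SubPropertyOf hasNephew
```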

 

There were a total of 12 participants. To reduce the learning effect on the evaluation, 6 participants were asked to model using the ROWLTab plugin first, and the other 6 were asked to model using standard Protégé first.

Performance was measured using 3 criteria. 

1. Quickness: average time required to model a question.

2. Input needed: average number of inputs (key presses + mouse clicks) required to model a question.

3. Correctness: how correct the modeled answer is.

 

 

Experiment result:

Quickness: Quickness is measured as the number of seconds required, on average, for each question. Time is calculated as follows:

avgTime = average over the 4 questions of the same difficulty level

This time is further divided by 12 to get the average per participant:

avgTime =  avgTime / 12
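As a minimal sketch of this averaging (the timings below are made-up placeholders, not the measured data, which is in the linked results repository):

```python
def avg_time(times_per_difficulty, num_participants=12):
    """Average the total seconds over the 4 questions of one
    difficulty level, then divide by the number of participants."""
    per_question = sum(times_per_difficulty) / len(times_per_difficulty)
    return per_question / num_participants

# Hypothetical total times (seconds) for the 4 easy questions:
easy_totals = [480, 520, 450, 510]
print(avg_time(easy_totals))
```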

Using this formula, we obtained the result below:

 

Input needed: the average number of inputs (key presses + mouse clicks) required to model a question.

This is also averaged using the formula above.

We obtained the following result:

 

 

The experiment shows that standard Protégé performs better for the easy questions, while the ROWLTab plugin performs better for the medium and hard questions.

Full Results:

The full raw results can be found at https://github.com/md-k-sarker/ROWLPluginEvaluation/tree/master/results . A detailed description and evaluation is available as a PDF.

Software used to conduct the experiment: 

The software, together with its source code, is available at: https://github.com/md-k-sarker/ROWLPluginEvaluation

Acknowledgement

This work was supported by the National Science Foundation under award 1017225 III: Small: TROn – Tractable Reasoning with Ontologies.