I have been trying to find a similar evaluation tool, but since each domain is different and measuring the degree of domain coverage is application-dependent, this is difficult and subjective. The only tool I have found that is based on quality metrics is the OOPS! pitfall scanner. It will tell you the number and type of bad design choices in each ontology; domain coverage, however, you will have to assess against your own criteria.
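In case it helps, OOPS! can also be queried programmatically rather than through the web form. Below is a minimal sketch in Python; the endpoint URL and the XML request format are assumptions based on the OOPS! REST service description, so please check the current OOPS! documentation before relying on them:

```python
import urllib.request

# Assumed OOPS! REST endpoint -- verify against the current OOPS! docs.
OOPS_ENDPOINT = "http://oops.linkeddata.es/rest"

def build_oops_request(ontology_url):
    """Build the XML request body the OOPS! REST service is assumed to
    expect: the ontology is passed by URL, all pitfalls are checked
    (empty <Pitfalls/>), and an XML report is requested."""
    return (
        '<?xml version="1.0" encoding="UTF-8"?>'
        "<OOPSRequest>"
        f"<OntologyUrl>{ontology_url}</OntologyUrl>"
        "<OntologyContent></OntologyContent>"
        "<Pitfalls></Pitfalls>"
        "<OutputFormat>XML</OutputFormat>"
        "</OOPSRequest>"
    )

def scan(ontology_url):
    """POST the request and return the raw XML pitfall report."""
    body = build_oops_request(ontology_url).encode("utf-8")
    req = urllib.request.Request(
        OOPS_ENDPOINT, data=body,
        headers={"Content-Type": "application/xml"})
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")
```

You could then run `scan()` once per ontology and compare the number and severity of reported pitfalls side by side, keeping in mind that this only covers design quality, not domain coverage.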

Best,
Natalia

--
  Natalia Díaz Rodríguez.  PhD. Student, Computer Engineering
  Department of IT. Åbo Akademi University, Turku, Finland
  Dept. of Computer Science and Artificial Intelligence. University of Granada, 
Spain
  +34 669685055
  https://research.it.abo.fi/personnel/ndiaz



On 20.2.2014 22:04, Ghislain Atemezing wrote:
Hi Bernadette,
Many thanks for your answer.

> Can you be a bit more specific? Are you looking for evaluation criteria?
Yes, actually I am looking for evaluation criteria. Let's say two ontologies X and Y are built for the same domain D, without any competency questions (CQs), and X and Y reuse terms from other namespaces differently. If I were asked to run an experiment with users to evaluate those two ontologies (both qualitatively and quantitatively), is there already a framework to help me with this task?

> There are, of course, basic vocabulary considerations aimed at helping people review a vocabulary and evaluate its usefulness in the Best Practices document, but presumably you're looking for more, or for something different? [1]

Thanks again for your advice.

Cheers,
Ghislain




Reply via email to