On Fri, 2011-09-23 at 16:08 +0200, Dennis Patzer wrote: 
> Thanks for your answer. It solved the problem!!

Good.

> I have another more general question: When I integrate other external 
> ontologies, the problem is that building an inference model takes a lot 
> of time. Is there a way to bypass this issue?

If the time is going into loading the ontologies over the web, then
cache them locally, e.g. using the OntDocumentManager (see the sketch
below).
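
For example, something along these lines (an untested sketch; the
ontology URI and local file name are placeholders) makes the document
manager resolve a remote ontology from a locally cached copy:

    import com.hp.hpl.jena.ontology.OntDocumentManager;
    import com.hp.hpl.jena.ontology.OntModel;
    import com.hp.hpl.jena.ontology.OntModelSpec;
    import com.hp.hpl.jena.rdf.model.ModelFactory;

    OntModel m = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM);
    OntDocumentManager dm = m.getDocumentManager();
    // Redirect the remote ontology URI to the local cached copy, so
    // reads and owl:imports of that URI hit the file system instead
    // of the web
    dm.addAltEntry("http://example.org/ontologies/external",
                   "file:cache/external.owl");
    m.read("http://example.org/ontologies/external");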

If the time is going into computing the inference closure itself, and
you really do need inference (if not, just construct a plain
no-inference model), then your options are (sketches for each follow
below):
  - pick a faster inference engine; the best choice among the built-in
    OWL rule sets is OWLMicro, and if that's still too slow then look
    at third-party reasoners such as Pellet
  - if you only need certain entailments, tune your own rule set
  - compute the closure ahead of time, store it, and then reuse the
    stored closure via a plain no-inference model
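
On the first option: switching the built-in reasoner is just a matter
of the OntModelSpec you pass when creating the model. A sketch, where
baseModel stands in for your own data:

    // OWLMicro rule reasoner, much cheaper than the full OWL rule set
    OntModel inf = ModelFactory.createOntologyModel(
            OntModelSpec.OWL_MEM_MICRO_RULE_INF, baseModel);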
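
On the second option: you can feed a GenericRuleReasoner just the
rules you actually need. A minimal sketch (the single subclass rule
here is only an illustration):

    import com.hp.hpl.jena.rdf.model.InfModel;
    import com.hp.hpl.jena.rdf.model.ModelFactory;
    import com.hp.hpl.jena.reasoner.rulesys.GenericRuleReasoner;
    import com.hp.hpl.jena.reasoner.rulesys.Rule;

    // Only the entailment we care about: type propagation up the
    // subclass hierarchy
    String ruleSrc =
        "[rdfs9: (?a rdfs:subClassOf ?b), (?x rdf:type ?a) "
        + "-> (?x rdf:type ?b)]";
    GenericRuleReasoner reasoner =
        new GenericRuleReasoner(Rule.parseRules(ruleSrc));
    InfModel inf = ModelFactory.createInfModel(reasoner, baseModel);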
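
On the third option: materialize the closure once, write it out, and
later read it back into a plain model that has no reasoner attached
(sketch; the file name is a placeholder):

    import java.io.FileOutputStream;

    import com.hp.hpl.jena.rdf.model.Model;
    import com.hp.hpl.jena.rdf.model.ModelFactory;

    // One-off: compute the closure and save it
    OntModel inf = ModelFactory.createOntologyModel(
            OntModelSpec.OWL_MEM_MICRO_RULE_INF, baseModel);
    Model closure = ModelFactory.createDefaultModel();
    closure.add(inf);   // copying the InfModel materializes entailments
    closure.write(new FileOutputStream("closure.rdf"), "RDF/XML");

    // Later: load the stored closure with no inference cost
    Model plain = ModelFactory.createDefaultModel();
    plain.read("file:closure.rdf");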

Dave

