On 20/07/12 10:05, Dave wrote:
         OntModel ontm =
             ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM_MICRO_RULE_INF);
         ontm.read("file:data/temp.owl", "RDF/XML");

         ontm = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM, ontm);
This is wrapping the inference closure as a new model. What I suspect Olivier is actually doing is:

ontm = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM, ontm.getBaseModel());

which would, indeed, remove the inference.

Olivier - I think what you need to do is keep *both* models around: the plain base model, to work around the namedHierarchyRoots problem, and the model with the inference engine attached. Then use whichever one you need for a given purpose. This is not an uncommon pattern: sometimes we need the inference closure of the triples, and other times not.
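To illustrate the two-model pattern, here is a minimal sketch. The class names, namespace, and the in-memory ontology are made up for the example (your data comes from temp.owl instead), and the imports are for current Apache Jena - older 2.x releases used the com.hp.hpl.jena packages:

```java
import org.apache.jena.ontology.OntClass;
import org.apache.jena.ontology.OntModel;
import org.apache.jena.ontology.OntModelSpec;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.vocabulary.RDFS;

public class TwoModels {

    // Returns {asserted-only result, inference result} for the same query.
    static boolean[] check() {
        String ns = "http://example.org/onto#"; // hypothetical namespace

        // Plain, no-inference model: use this one where the
        // namedHierarchyRoots problem bites.
        OntModel base = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM);
        OntClass a = base.createClass(ns + "A");
        OntClass b = base.createClass(ns + "B");
        OntClass c = base.createClass(ns + "C");
        a.addSubClass(b);
        b.addSubClass(c);

        // Inference model layered over the SAME base model: edits to the
        // base are visible here, and queries see the OWL-micro closure.
        OntModel inf = ModelFactory.createOntologyModel(
                OntModelSpec.OWL_MEM_MICRO_RULE_INF, base);

        return new boolean[] {
            base.contains(c, RDFS.subClassOf, a), // asserted triples only
            inf.contains(c, RDFS.subClassOf, a)   // includes transitive closure
        };
    }

    public static void main(String[] args) {
        boolean[] r = check();
        System.out.println("base sees C subClassOf A: " + r[0]);
        System.out.println("inf  sees C subClassOf A: " + r[1]);
    }
}
```

Both OntModel handles share one underlying graph, so there is no duplication of data - only the view differs.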

Ian
