Hi,
I want to load an ontology (together with its imports) into an inference
model, and then materialize the inferred triples, since no further
changes will be made to the model and we would like to avoid the
reasoner overhead for read-only access.
How does one do the materialization? A simple add() doesn't seem to cut it:
@Test
public void testMaterialize()
{
    OntModel infModel = OntDocumentManager.getInstance()
        .getOntology("https://www.w3.org/ns/ldt#", OntModelSpec.OWL_MEM_RDFS_INF);
    OntModel nonInfModel = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM);

    nonInfModel.add(infModel);

    assertEquals(infModel.size(), nonInfModel.size());
}
The assertion fails with: expected:<1350> but was:<2264>
Martynas