Dear Jena users,
I was wondering whether, for large datasets, it would be more
appropriate to store inferences directly on disk rather than creating an
in-memory model and afterwards adding the inferred statements to the base
model. So far I have only managed the second approach.

String directory = "MyDatabases/Dataset1";
dataset = TDBFactory.createDataset(directory);
Model base = dataset.getDefaultModel();
OntModel o = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM,base);
printStatements(o); // check base model content
load(base); // some reading from owl files (only for the first run)
OntModel m = ModelFactory.createOntologyModel( OntModelSpec.OWL_MEM_RULE_INF, 
base );
m.prepare();
base.add(m); // if this line is removed, the next time the model is created
             // printStatements(o) won't show the inferred knowledge
base.close();
dataset.close();

Is there a way to make the reasoner work directly on the TDB model, without
needing base.add(m)? And if there are many entailments, could the m model
cause an OutOfMemoryError?
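One variation I have been considering (only a sketch, not yet tested on large
data) is to persist just the reasoner's deductions instead of re-adding the
whole union model, via InfModel.getDeductionsModel(). As far as I understand,
the deduction graph is still built in memory, so this would not by itself
remove the OutOfMemory risk:

```java
// Sketch: store only the inferred triples in TDB, not base + inferences.
// Assumes the same TDB-backed "base" model as in the snippet above.
Reasoner r = OWLFBRuleReasonerFactory.theInstance().create(null);
InfModel inf = ModelFactory.createInfModel(r, base);
inf.prepare();                               // force the forward rules to fire
Model deductions = inf.getDeductionsModel(); // inferred statements only
base.add(deductions);                        // persist them in the TDB model
```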

I also tried creating an InfModel, because I found this suggestion at
http://answers.semanticweb.com/questions/956/how-to-set-up-jena-sdb-with-inference-capability
, using the following implementation:

String directory = "MyDatabases/Dataset1";
dataset = TDBFactory.createDataset(directory);
Model base = dataset.getDefaultModel();
OntModel o = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM,base);
printStatements(o); // check base model content
load(base); // some reading from owl files (only for the first run)
Reasoner r = OWLFBRuleReasonerFactory.theInstance().create(null);
InfModel inf=ModelFactory.createInfModel(r, base);
inf.prepare();
base.add(inf); // if this line is removed, the next time the model is created
               // printStatements(o) won't show the inferred knowledge
base.close();
dataset.close();

... same results :( on the second run, printStatements(o) prints the same things.

BR, Paolo
