Andy, I had previously been storing the inferred form of an ontology WITHOUT transactions. But when I added the use of transactions, I ran into both out-of-memory errors and significant performance problems.
Is the code below the best way to create a separate inferred model using transactions?

    Dataset dataset = ... ; // assume this is initialized properly
    dataset.begin(ReadWrite.WRITE);
    try {
        // already-stored, non-inferred data
        Model model = dataset.getNamedModel(modelName);
        OntModel omodel = ModelFactory.createOntologyModel(spec, model);
        omodel.prepare();

        // new model to contain the inferred data
        Model pomodel = dataset.getNamedModel(inferredName);
        pomodel.add(omodel); // add the inferred ontology as plain data

        dataset.commit();
    } finally {
        dataset.end();
    }

Another question: you refer below to the "backwards rule engine". Could you be specific as to which OntModelSpec you mean? Or is it simply that OntModel.prepare() is an explicit call to execute the forward rules, and not calling prepare() leaves the backward rules to run on demand, only for what is necessary to answer the particular query?

-----Original Message-----
From: Andy Seaborne [mailto:a...@apache.org]
Sent: Thursday, May 02, 2013 8:50 AM
To: users@jena.apache.org
Subject: Re: transaction and caching

How best to handle inference depends on how it is being used. There is a tension between the full rules engine and having the data in a database.

If you can, maintain the data in the database only, inferring once and writing the inferred triples to the database; the tradeoff is that updates are then no longer automatic.

If that does not work well for you, then you might consider doing inference at query time with the backwards rule engine. Normally, the rules engine is hybrid: some forward rules, executed once on .prepare(), then backward rules on demand at query time. When much of the work done by .prepare() goes to waste because it is not used in the transaction, backward-only makes sense, delaying the work until it is needed during transaction execution. (Maybe Dave can add something here.)

Andy
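P.S. To make my second question concrete: if by "backwards rule engine" you mean something like a GenericRuleReasoner (from com.hp.hpl.jena.reasoner.rulesys) set to BACKWARD mode, the sketch below is roughly what I would try at query time. I am guessing at the setup here; the rules file name and the model/variable names are placeholders from my earlier snippet, so please correct me if there is an OntModelSpec that wires this up directly.

    // "myrules.rules" is a placeholder for whatever rules apply here
    List<Rule> rules = Rule.rulesFromURL("file:myrules.rules");
    GenericRuleReasoner reasoner = new GenericRuleReasoner(rules);
    reasoner.setMode(GenericRuleReasoner.BACKWARD); // backward only: no forward pass, so no prepare() cost

    dataset.begin(ReadWrite.READ);
    try {
        // the stored, non-inferred data
        Model base = dataset.getNamedModel(modelName);
        InfModel inf = ModelFactory.createInfModel(reasoner, base);
        // queries against inf fire the backward rules on demand,
        // doing only the work needed to answer each query
    } finally {
        dataset.end();
    }

If that is the intended pattern, it would avoid both materializing the inferred triples in the write transaction and the .prepare() overhead that is causing my problems.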