I am testing under several scenarios. For some static cases, I precompute the inferences and store them. For this case, I have one open question. If one later wants to combine multiple ontologies and data, each with its own precomputed inferences, is there ever a situation where the original non-inferenced OWL specifications are needed, because of their interactions with the inferencing of the other ontologies being combined? Will one lose triples that would have been inferred if one had started by running inference over the original OWL code?
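To make the combination question concrete, here is a minimal sketch, assuming Jena 2.x (the com.hp.hpl.jena packages used elsewhere in this thread) and hypothetical file names. Jena's rule reasoners are monotonic, so triples already inferred from each ontology separately remain valid after combining; what a union of pre-computed closures can miss are inferences that only arise from the interaction (for example, a subClassOf axiom in one ontology firing against instance data in the other). Keeping the original non-inferenced sources and running the reasoner once over their union picks those up:

    import com.hp.hpl.jena.ontology.OntModel;
    import com.hp.hpl.jena.ontology.OntModelSpec;
    import com.hp.hpl.jena.rdf.model.Model;
    import com.hp.hpl.jena.rdf.model.ModelFactory;
    import com.hp.hpl.jena.util.FileManager;

    // Union the original (non-inferenced) sources, then reason once over the
    // combined model so cross-ontology rule firings are not missed.
    // File names here are hypothetical.
    Model union = ModelFactory.createDefaultModel();
    union.add(FileManager.get().loadModel("ontologyA.owl"));
    union.add(FileManager.get().loadModel("ontologyB.owl"));

    OntModelSpec spec = new OntModelSpec(OntModelSpec.OWL_MEM_MICRO_RULE_INF);
    OntModel omodel = ModelFactory.createOntologyModel(spec, union);
    omodel.prepare(); // force the rule engine to run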
Given:

    OntModelSpec spec = new OntModelSpec(OntModelSpec.OWL_MEM_MICRO_RULE_INF);

you are saying that with the following code:

    Model memmodel = ModelFactory.createDefaultModel();
    memmodel.add( dbmodel );
    OntModel omodel = ModelFactory.createOntologyModel(spec, memmodel);

my "database model" will be pulled completely into memory and placed in memmodel, so that the OntModel can run much more efficiently? Whereas with the following omodel, it will always go to the database?

    Model dbmodel = SDBFactory.connectNamedModel(store, name);
    OntModel omodel = ModelFactory.createOntologyModel(spec, dbmodel);

That probably explains the slowness. With my current code, the initialization of the Model and OntModel did seem relatively fast.

-----Original Message-----
From: Dave Reynolds [mailto:[email protected]]
Sent: Friday, April 05, 2013 10:39 AM
To: [email protected]
Subject: Re: Persisting OWL in Jena

On 05/04/13 15:09, David Jordan wrote:
> Dave,
> I have been getting "less than stellar" performance in my benchmarking.
> I would just like to be sure that the way I am using Jena IS performing
> inference over in-memory models. I have stored Models in the database.
> When I access them and create an OntModel, I do it in the following manner:
>
>     Store store; // assume this is initialized
>     Model model = SDBFactory.connectNamedModel(store, name);
>     OntModelSpec spec = new OntModelSpec(OntModelSpec.OWL_MEM_MICRO_RULE_INF);
>     OntModel omodel = ModelFactory.createOntologyModel(spec, model);
>     omodel.prepare();
>
> Does this result in an in-memory model as you recommend?

No, that's an inference model running over the database.

> If not, could you show the necessary code.

It depends on what you are trying to do: whether your data is static, which inferences you want (all, or just some interesting ones), whether the source data is large, whether it is available as a file or only as a database model, etc.

In the simple case your data is essentially fixed and you can precompute and store the inferences:

    Model memmodel = ModelFactory.createDefaultModel();
    // read data into memmodel, or use FileManager.get().loadModel instead
    OntModelSpec spec = new OntModelSpec(OntModelSpec.OWL_MEM_MICRO_RULE_INF);
    OntModel omodel = ModelFactory.createOntologyModel(spec, memmodel);
    dbmodel.add( omodel );

If you need only some of the inferences, you might be more selective about what the final "add" phase puts into the database model. You then access that data in future uses via a non-inference model:

    Model dbmodel = SDBFactory.connectNamedModel(store, name);
    OntModelSpec spec = new OntModelSpec(OntModelSpec.OWL_MEM);
    OntModel omodel = ModelFactory.createOntologyModel(spec, dbmodel);

If your data is already in the database and you want to dynamically compute the inferences over its current state, then do something more like:

    Model memmodel = ModelFactory.createDefaultModel();
    memmodel.add( dbmodel );
    OntModelSpec spec = new OntModelSpec(OntModelSpec.OWL_MEM_MICRO_RULE_INF);
    OntModel omodel = ModelFactory.createOntologyModel(spec, memmodel);
    // use omodel

Any updates to the data will need to be reflected into the omodel. If those updates are done in the same VM that might be OK; if they are done by other database clients, that is problematic.

Fundamentally, databases and Jena's rule-based inference do not mix well. Depending on what you need from inference, you may be able to achieve the same effects by query rewriting, or by query rewriting plus some simpler pre-computed closure. In the worst case you need a full deductive database. For minimal RDFS inference, there is some support in the TDB loader for computing that more efficiently at load time than the full in-memory rule systems do.

Dave
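Dave's first snippet above leaves dbmodel implicit. Put together end to end, the precompute-and-store recipe might look like the following sketch; the SDB store description file, graph name, and data file below are all hypothetical:

    import com.hp.hpl.jena.ontology.OntModel;
    import com.hp.hpl.jena.ontology.OntModelSpec;
    import com.hp.hpl.jena.rdf.model.Model;
    import com.hp.hpl.jena.rdf.model.ModelFactory;
    import com.hp.hpl.jena.sdb.SDBFactory;
    import com.hp.hpl.jena.sdb.Store;
    import com.hp.hpl.jena.util.FileManager;

    // Connect to the SDB store ("sdb.ttl" is a hypothetical store description).
    Store store = SDBFactory.connectStore("sdb.ttl");
    Model dbmodel = SDBFactory.connectNamedModel(store, "http://example.org/graph");

    // Run inference entirely in memory over the source data.
    Model memmodel = FileManager.get().loadModel("data.owl"); // hypothetical file
    OntModelSpec spec = new OntModelSpec(OntModelSpec.OWL_MEM_MICRO_RULE_INF);
    OntModel omodel = ModelFactory.createOntologyModel(spec, memmodel);

    // Persist the base triples plus the computed closure; later reads can then
    // use a plain OWL_MEM OntModel over dbmodel with no reasoner attached.
    dbmodel.add( omodel );
    store.close();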

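As one illustration of the query-rewriting alternative mentioned above: for subclass reasoning alone, a SPARQL 1.1 property path evaluated over the raw database model can stand in for a materialized rdf:type closure. This is a minimal sketch covering only that one rewrite; the store description, graph name, and class URI are hypothetical:

    import com.hp.hpl.jena.query.QueryExecution;
    import com.hp.hpl.jena.query.QueryExecutionFactory;
    import com.hp.hpl.jena.query.ResultSet;
    import com.hp.hpl.jena.rdf.model.Model;
    import com.hp.hpl.jena.sdb.SDBFactory;
    import com.hp.hpl.jena.sdb.Store;

    // Plain database model, no reasoner attached.
    Store store = SDBFactory.connectStore("sdb.ttl");
    Model dbmodel = SDBFactory.connectNamedModel(store, "http://example.org/graph");

    // Follow rdf:type then zero-or-more rdfs:subClassOf links at query time,
    // rather than storing the inferred rdf:type triples.
    String q =
        "PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> " +
        "PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#> " +
        "SELECT ?x WHERE { ?x rdf:type/rdfs:subClassOf* <http://example.org/Person> }";
    QueryExecution qexec = QueryExecutionFactory.create(q, dbmodel);
    try {
        ResultSet results = qexec.execSelect();
        while (results.hasNext()) {
            System.out.println(results.next());
        }
    } finally {
        qexec.close();
    }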