Dave, I have been getting "less than stellar" performance in my benchmarking, and I would just like to be sure that the way I am using Jena IS performing inference over in-memory models. I have stored Models in the database. When I access them and create an OntModel, I do it in the following manner:
Store store; // assume this is initialized
Model model = SDBFactory.connectNamedModel(store, name);
OntModelSpec spec = new OntModelSpec(OntModelSpec.OWL_MEM_MICRO_RULE_INF);
OntModel omodel = ModelFactory.createOntologyModel(spec, model);
omodel.prepare();

Does this result in an in-memory model as you recommend? If not, could you show the necessary code? It would be great to discover I am doing this wrong and that there is a missing line or change in usage here that can make things run a lot faster...

-----Original Message-----
From: Dave Reynolds [mailto:[email protected]]

> c.initSdb(OntModelSpec.OWL_MEM_MICRO_RULE_INF); // connect to MySQL db

Not relevant to your problem here, but inference over a database will be very slow. It is better to perform any inference over in-memory models.
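P.S. In case it helps to make the question concrete, here is my guess at what "inference over in-memory models" would look like: copy the SDB-backed model into a plain in-memory model first, and build the OntModel over the copy. This is an untested sketch based on my possibly wrong reading of your advice; `store` and `name` are the same variables as in my snippet above.

```java
// Sketch (untested): pull the triples out of the SDB store into a plain
// in-memory model, then run the micro rule reasoner over that copy.
Model dbModel = SDBFactory.connectNamedModel(store, name);

Model memModel = ModelFactory.createDefaultModel();
memModel.add(dbModel); // copies all statements into memory

OntModel omodel = ModelFactory.createOntologyModel(
        OntModelSpec.OWL_MEM_MICRO_RULE_INF, memModel);
omodel.prepare(); // inference now touches only the in-memory copy
```

Is that the shape of what you meant, or is there a cheaper way to get the data into memory than a full `add()` copy?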
