Thanks for the tips. Seems to speed things up overall, especially removing the schema and tdbModel from the materialised model as you suggested.
Much appreciated, Steve.

On 08/07/2014 17:17, "Dave Reynolds" <[email protected]> wrote:
.. snipped ..
> Just write out the whole model; there will be a lot more inferred
> triples than there are base triples, so you won't be saving much by
> omitting the base ones.
>
> The issue is that the reasoners in general, and OWL_Micro specifically,
> use a mix of forward and backward deductions.
>
> The forward deductions are indeed stored separately and can be obtained
> by getDeductionsModel.
>
> However, the backward deductions are only computed on demand in response
> to queries. Some of those are indirectly cached in the backward
> reasoner's tabled predicates, but others are never cached. So the only
> way to obtain all deductions is to ask the most general query.
>
> There are a few things you can do which might help performance.
>
> First, you could materialize all the triples before you write them out.
> The writer will make a lot of separate calls, so anything that isn't
> cached may be recomputed. So try something like:
>
>     Model myMaterializedModel = ModelFactory.createDefaultModel();
>     myMaterializedModel.add( ont );
>
> Then you can write out myMaterializedModel, or if you really want to you
> could remove the base models before doing so:
>
>     myMaterializedModel.remove( schema );
>     myMaterializedModel.remove( tdbModel );
>
> Second, given that the reasoning is being done in memory, you may find it
> more efficient to copy tdbModel into a memory model first, then wrap the
> reasoner round that.
>
> Third, if for your purposes there are only certain types of queries you
> need to run, you may choose to materialize only some of the inferences.
> For example, if you only care about inferred types you could perform a
> more restricted materialization such as:
>
>     myMaterializedModel.add( ont.listStatements(null, RDF.type, (RDFNode)null) );
>
> Dave
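For reference, Dave's second suggestion (copy the TDB-backed model into memory before wrapping the reasoner round it) might look roughly like the sketch below. This is an assumption-laden illustration, not code from the thread: the variable names `schema` and `tdbModel` follow the thread's examples, and the exact way `tdbModel` and `schema` are obtained in the original setup is unknown, so only the copy-then-wrap step is shown.

```java
// Sketch only, assuming Jena 2.x (com.hp.hpl.jena.*) as current at the
// time of this thread, and the variable names used above:
//   schema   - the ontology/TBox model
//   tdbModel - the TDB-backed data model
import com.hp.hpl.jena.rdf.model.InfModel;
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.reasoner.ReasonerRegistry;

// Copy the TDB-backed data into a plain in-memory model first,
// so the reasoner's many lookups hit memory rather than disk...
Model memData = ModelFactory.createDefaultModel();
memData.add( tdbModel );

// ...then wrap the OWL_Micro reasoner round the in-memory copy.
InfModel ont = ModelFactory.createInfModel(
        ReasonerRegistry.getOWLMicroReasoner().bindSchema( schema ),
        memData );
```

From here the materialization steps above (`myMaterializedModel.add( ont )` and so on) apply unchanged, since `ont` is still an ordinary `Model`.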
