On Thu, 2011-09-08 at 15:28 +0000, David Jordan wrote: 
> I am getting ready to write my first application that combines several models 
> that have already been populated in SDB. This application will be doing 
> inferencing to answer some questions.
> 
> One of the models had a relatively large hierarchy, 10s of 1000s of classes. 
> I created a fully inferenced version of this model and saved it into the 
> database as a separate model. This is one of the models I will combine in the 
> application.
> 
> I then have several fairly small models that I have created and stored, all 
> of these in SDB.
> 
> Now I am going to write an application that reads a small OWL file into a new 
> inference model, adding these predefined models stored in the database.

What do you mean by "add" here?

If you mean literally model.add then that will load all the data from
your database into the memory model. This may not be what you want.
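For example, a literal add copies every triple out of the store (a sketch only, assuming the Jena SDB jars are on the classpath; the store config file name and graph IRI are placeholders):

```java
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.sdb.SDBFactory;
import com.hp.hpl.jena.sdb.Store;

public class AddCopiesData {
    public static void main(String[] args) {
        // Connect to the store described by an SDB assembler file (name illustrative)
        Store store = SDBFactory.connectStore("sdb.ttl");
        Model dbModel = SDBFactory.connectNamedModel(store, "http://example/large");

        Model mem = ModelFactory.createDefaultModel();
        // This physically copies every triple from the database into memory:
        mem.add(dbModel);
    }
}
```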

> Assume the following models:
> 1.      Large pre-inferenced model (stored in SDB)
> 2.      3 models that have not been pre-inferenced (stored in SDB)
> 3.      A newly created in memory model with inferencing
> 
> I will be combining these models together. When I access the pre-existing 

> models in SDB, do I need to create an inference model for them, or will that 
> happen automatically because I am combining them with the new in-memory 
> inference model?

As above, depends on what you mean by "combine".

To give a single view over such a disparate collection of models you
have a few options:

1. Load all the data into a single memory model, with inference
(presumably not what you mean to do)

2. Create an OntModel (without inference) and use addSubModel to add
each component model as a dynamic union. If you want the inference
closure of your in-memory model, then wrap that as an InfModel before
you addSubModel it to the union
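Something like the following, purely as a sketch (graph IRIs, file names and the choice of OWLMicroReasoner are illustrative):

```java
import com.hp.hpl.jena.ontology.OntModel;
import com.hp.hpl.jena.ontology.OntModelSpec;
import com.hp.hpl.jena.rdf.model.InfModel;
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.reasoner.ReasonerRegistry;
import com.hp.hpl.jena.sdb.SDBFactory;
import com.hp.hpl.jena.sdb.Store;
import com.hp.hpl.jena.util.FileManager;

public class DynamicUnion {
    public static void main(String[] args) {
        Store store = SDBFactory.connectStore("sdb.ttl");

        // Wrap only the small in-memory model with a reasoner
        Model small = FileManager.get().loadModel("small.owl");
        InfModel inf = ModelFactory.createInfModel(
                ReasonerRegistry.getOWLMicroReasoner(), small);

        // Non-inferencing OntModel acting as a dynamic union over the parts
        OntModel union = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM);
        union.addSubModel(inf);
        union.addSubModel(SDBFactory.connectNamedModel(store, "http://example/preinferenced"));
        union.addSubModel(SDBFactory.connectNamedModel(store, "http://example/modelA"));
        // ... then query 'union' through the ordinary Model/OntModel API
    }
}
```

The key point is that only the in-memory component is wrapped for inference; the database-backed models are seen through the union as plain graphs.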

3. Use SPARQL Datasets and access all your data with SPARQL rather than
the Java API, being explicit about the named graphs you are querying

Note that there are some complex tradeoffs here, both in terms of
exactly what inferences you are trying to do and what your access
patterns are.

In particular accessing things at the API level via dynamic unions may
be slow for the database parts because you are asking one triple pattern
at a time and doing joins in your application code, whereas via SPARQL
you can delegate some of the joins to the database. Note that you can
query the union of all your SDB models by switching on the
SDB.unionDefaultGraph flag.
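That is, roughly (again a sketch; the assembler file name and query are placeholders):

```java
import com.hp.hpl.jena.query.Dataset;
import com.hp.hpl.jena.query.QueryExecution;
import com.hp.hpl.jena.query.QueryExecutionFactory;
import com.hp.hpl.jena.query.QueryFactory;
import com.hp.hpl.jena.query.ResultSetFormatter;
import com.hp.hpl.jena.sdb.SDB;
import com.hp.hpl.jena.sdb.SDBFactory;
import com.hp.hpl.jena.sdb.Store;

public class QueryUnion {
    public static void main(String[] args) {
        Store store = SDBFactory.connectStore("sdb.ttl");
        Dataset ds = SDBFactory.connectDataset(store);

        // Make the default graph of SPARQL queries the union of all named graphs
        SDB.getContext().setTrue(SDB.unionDefaultGraph);

        String q = "SELECT * WHERE { ?s ?p ?o } LIMIT 10";
        QueryExecution qe = QueryExecutionFactory.create(QueryFactory.create(q), ds);
        try {
            ResultSetFormatter.out(qe.execSelect());
        } finally {
            qe.close();
        }
    }
}
```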

Dave

