Just a quick point before I look into how to use union graphs.

The SDB-backed model I was using wasn't an OntModel; it was just  
produced by copying the OntModel into it, e.g.

                _store = SDBFactory.connectStore("sdb.ttl");
                Model _sdbModel = SDBFactory.connectNamedModel(_store, URI);
                _sdbModel.add(ontModel);

but it still seemed to be much slower than just using a purely  
in-memory model.

Since the only reason for me to persist the inferred models would be  
for debugging, I think using memory-only inference models (with an  
option to make them SDB-backed) may be the way to go.  The drawback  
will be that I'll have to regenerate the inference models on any  
change to the SDB-backed "base" model, right?

So, right now I'm thinking of managing the following three Models (not  
OntModels) at runtime for each "A-Box" model that needs to be  
persisted:

1. An SDB-backed baseModel that contains just the A-Box model triples.
2. A memory-only memModel that is basically a copy of an OWL_MEM  
OntModel based on the baseModel, with possibly some specially selected  
SPIN inferences.
3. A memory-only infModel that is a copy of the memModel, plus all the  
generalized SPIN inferences (mainly those attached to owl:Thing).

Any comments on this approach?

Thanks,
Jeff

On Dec 19, 2009, at 7:08 AM, Holger Knublauch wrote:

> Jeff,
>
> whenever you have a MultiUnion graph (or OntModel) that consists of  
> more than one sub-graph, then the performance of the SPARQL engine  
> might go down significantly. This is because the triple matches may  
> need to dynamically merge partial results from multiple sub-graphs.  
> On the other hand, if you just have a single graph (including a  
> single SDB graph), then the system can exploit native optimizations and do  
> complex graph patterns with a single, optimized operation. In  
> practice my guess is that you will have best performance if you put  
> all sub-graphs into the same SDB (possibly split into named graphs)  
> and then operate on the union graph (via the named graph 
> <urn:x-arq:UnionGraph>).
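> As a sketch of that last suggestion (urn:x-arq:UnionGraph is ARQ's 
> special graph name for the union of all named graphs in the dataset):
>
> ```sparql
> # Match triples across the union of all named graphs in the store
> SELECT ?s ?p ?o
> WHERE {
>   GRAPH <urn:x-arq:UnionGraph> {
>     ?s ?p ?o
>   }
> }
> ```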
>
> Holger
>
>
> On Dec 18, 2009, at 9:40 PM, Jeffrey Schmitz wrote:
>
>> Well, I was doing things so fast and furiously that I'm not sure
>> what it was that helped, but I wasn't tweaking any of my queries, so  
>> it wasn't anything like that.  I think it had mostly to do with making
>> sure I was using an "in memory" model instead of a database-backed
>> model.  When I created an Ontology model like:
>>
>>              OntModel ontModel = ModelFactory.createOntologyModel(_ontSpec, _baseModel);
>>
>> and then created an SDB-backed Model from ontModel (as suggested on  
>> the Jena board, since ontModel was very slow), and then fed that  
>> SDB-backed model into SPINInferences.run, it was slow, with the times
>> I quoted.
>>
>> But when I created a memory only model from ontModel above, e.g.
>>
>>              Model model = ModelFactory.createDefaultModel();
>>              model.notifyEvent(GraphEvents.startRead);
>>              try {
>>                      model.add(ontModel);
>>              } finally {
>>                      model.notifyEvent(GraphEvents.finishRead);
>>              }
>>
>> and ran that model through SPINInferences.run, the overall time was
>> much faster.  Does that make sense?  I think I'm just starting to  
>> come
>> up to speed on the inner and inter-workings of all these different
>> kinds of models and when to use which kind.  Certainly can make the
>> head spin.
>>
>>
>>
>> On Dec 18, 2009, at 11:14 PM, Holger Knublauch wrote:
>>
>>> I am glad to hear that. If you have any lessons learnt on the "dumb
>>> things" then we would appreciate if you could share them with others
>>> on this mailing list. For example, changing the order of clauses can
>>> cut query execution times by many orders of magnitude, e.g. see
>>>
>>> http://ascensionsemantica.blogspot.com/2009/11/bad-bad-sparql-pattern-good-spintrace.html
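>>>
>>> A made-up illustration of the clause-ordering point (ex: is a
>>> hypothetical namespace): if the engine executes clauses top to
>>> bottom, starting with the most selective pattern avoids enumerating
>>> every resource in the model first.
>>>
>>> ```sparql
>>> PREFIX ex: <http://example.org/>
>>>
>>> # Bad: the unselective pattern runs first and binds every triple
>>> SELECT ?person WHERE {
>>>   ?person ?p ?o .
>>>   ?person ex:status "active" .
>>> }
>>>
>>> # Better: the selective pattern narrows ?person before the scan
>>> SELECT ?person WHERE {
>>>   ?person ex:status "active" .
>>>   ?person ?p ?o .
>>> }
>>> ```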
>>>
>>> Holger
>>>
>>>
>>> On Dec 18, 2009, at 12:47 PM, Jeffrey Schmitz wrote:
>>>
>>>> Actually, after fixing a few dumb things I was doing, the speed has
>>>> improved immensely.  In the end, I don't think suppressing the spin
>>>> model inferences really buys much, since inferences are now running  
>>>> in under a second, even with those in.
>>>>
>>>> Jeff
>>>>
>>>> On Dec 18, 2009, at 12:24 PM, Holger Knublauch wrote:
>>>>
>>>>>> Anyway, these rules take quite a long time (about 90 seconds) to
>>>>>> execute on my OWL_MEM model, which isn't all that big at about  
>>>>>> 3000
>>>>>> triples including all the imported models.  Does that sound
>>>>>> plausible?  If so, I'm thinking I may need to add these  
>>>>>> inferences
>>>>>> in a more targeted way.
>>>>>
>>>>> In general, as you certainly know, the performance of SPARQL  
>>>>> queries
>>>>> strongly depends on the implementation of the engine and whether  
>>>>> (or
>>>>> not) it re-orders clauses automatically. It is best to assume that
>>>>> ARQ will execute the clauses (triple matches and filters) from top
>>>>> to bottom, but looking at the SPARQL Debugger in TBC will help you
>>>>> identify the real execution order. In particular, FILTER clauses  
>>>>> may be re-ordered, with a significant performance penalty. There is  
>>>>> a new option in TBC, on the SPARQL preferences tab, to force ARQ to  
>>>>> leave the FILTER clauses in place, which is IMHO recommended.
>>>>>
>>>>> In SPIN in particular, other factors are important:
>>>>> - if a class has many subclasses with instances, then there  
>>>>> might be
>>>>> many individual SPARQL calls for each rule defined on the
>>>>> superclasses.
>>>>> - Each iteration will have a clause to bind ?this with all  
>>>>> instances
>>>>> of a class.
>>>>>
>>>>> If these cause performance issues, you can bypass the binding of  
>>>>> ?this by either:
>>>>> - making rules global (drop ?this and instead put them at owl:Thing  
>>>>> or rdfs:Resource); these global rules will be executed only once
>>>>> - using the new SPIN 1.1 property spin:thisUnbound to drop the  
>>>>> "?this rdf:type <class>" clause, which may theoretically slow  
>>>>> things down
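>>>>>
>>>>> A sketch of the two variants (ex: is a hypothetical namespace):
>>>>>
>>>>> ```sparql
>>>>> PREFIX ex: <http://example.org/>
>>>>>
>>>>> # Class-attached rule: SPIN binds ?this per instance, effectively
>>>>> # adding a "?this rdf:type ex:Order" clause on every iteration
>>>>> CONSTRUCT { ?this ex:needsReview true }
>>>>> WHERE { ?this ex:total ?t . FILTER (?t > 1000) }
>>>>>
>>>>> # Global variant (attached to owl:Thing, or with spin:thisUnbound
>>>>> # true): executed only once, with no type clause restricting ?this
>>>>> CONSTRUCT { ?x ex:needsReview true }
>>>>> WHERE { ?x ex:total ?t . FILTER (?t > 1000) }
>>>>> ```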
>>>>>
>>>>> After you run inferences in TBC, the Error Log will contain a list
>>>>> of the slowest queries, together with benchmarks.
>>>>>
>>>>> I hope this helps... let me know if you have ideas on how to  
>>>>> improve
>>>>> performance tuning.
>>>>>
>>>>> Regards,
>>>>> Holger
>>>>>
>>>>> --
>>>>>
>>>>> You received this message because you are subscribed to the Google
>>>>> Groups "TopBraid Composer Users" group.
>>>>> To post to this group, send email to [email protected].
>>>>> To unsubscribe from this group, send email to [email protected].
>>>>> For more options, visit this group at 
>>>>> http://groups.google.com/group/topbraid-composer-users?hl=en.
>>>>>
>>>>>
>>>>
>>>
>>
>
