Well, I was doing things so fast and furiously that I'm not sure
exactly what it was that helped, but I wasn't tweaking any of my
queries, so it wasn't anything like that. I think it mostly came down
to making sure I was using an "in-memory" model instead of a
database-backed model. When I created an ontology model like:
OntModel ontModel = ModelFactory.createOntologyModel(_ontSpec, _baseModel);
and then created an SDB-backed model from ontModel (as suggested on the
Jena board, since ontModel itself was very slow) and then fed that
SDB-backed model into SPINInferences.run, it was slow, with the times I
quoted.
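Roughly, the SDB-backed model was created along these lines (just a
sketch; the store description file name "sdb.ttl" is a placeholder, not
my actual setup):

```java
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.sdb.SDBFactory;
import com.hp.hpl.jena.sdb.Store;

// Connect to the SDB store via a store description file
// (file name here is a placeholder).
Store store = SDBFactory.connectStore("sdb.ttl");
Model sdbModel = SDBFactory.connectDefaultModel(store);

// Copy the ontology model's triples into the database-backed model.
sdbModel.add(ontModel);
```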
But when I created a memory-only model from ontModel above, e.g.

Model model = ModelFactory.createDefaultModel();
model.notifyEvent(GraphEvents.startRead);
try {
    model.add(ontModel);
} finally {
    model.notifyEvent(GraphEvents.finishRead);
}
and ran that model through SPINInferences.run, the overall time was
much faster. Does that make sense? I think I'm just starting to come
up to speed on the inner and inter-workings of all these different
kinds of models and when to use which kind. It can certainly make the
head spin.
On Dec 18, 2009, at 11:14 PM, Holger Knublauch wrote:
> I am glad to hear that. If you have any lessons learnt on the "dumb
> things", we would appreciate it if you could share them with others
> on this mailing list. For example, changing the order of clauses can
> shave many orders of magnitude off query execution times, e.g. see
>
> http://ascensionsemantica.blogspot.com/2009/11/bad-bad-sparql-pattern-good-spintrace.html
>
> Holger
>
>
> On Dec 18, 2009, at 12:47 PM, Jeffrey Schmitz wrote:
>
>> Actually, after fixing a few dumb things I was doing, the speed has
>> improved immensely. In the end, I don't think suppressing the SPIN
>> model inferences really buys much, since inferences now run in
>> under a second, even with those in.
>>
>> Jeff
>>
>> On Dec 18, 2009, at 12:24 PM, Holger Knublauch wrote:
>>
>>>> Anyway, these rules take quite a long time (about 90 seconds) to
>>>> execute on my OWL_MEM model, which isn't all that big at about 3000
>>>> triples including all the imported models. Does that sound
>>>> plausible? If so, I'm thinking I may need to add these inferences
>>>> in a more targeted way.
>>>
>>> In general, as you certainly know, the performance of SPARQL queries
>>> strongly depends on the implementation of the engine and whether (or
>>> not) it re-orders clauses automatically. It is best to assume that
>>> ARQ will execute the clauses (triple matches and filters) from top
>>> to bottom, but looking at the SPARQL Debugger in TBC will help you
>>> identify the real execution order. In particular, FILTER clauses may
>>> be re-ordered with a significant performance penalty. There is a new
>>> option in TBC on the SPARQL preferences tab to force ARQ to leave
>>> the FILTER clauses in place, which is IMHO recommended.
>>>
>>> In SPIN in particular, other factors are important:
>>> - If a class has many subclasses with instances, then there might be
>>> many individual SPARQL calls for each rule defined on the
>>> superclasses.
>>> - Each iteration will have a clause to bind ?this to all instances
>>> of a class.
>>>
>>> If these cause performance issues, you can bypass the binding of
>>> ?this by either
>>> - making rules global (drop ?this and instead attach them to owl:Thing
>>> or rdfs:Resource); these global rules will be executed only once, or
>>> - using the new SPIN 1.1 property spin:thisUnbound to drop the ?this
>>> rdf:type <class> clause, which may theoretically slow things down.
>>>
>>> After you run inferences in TBC, the Error Log will contain a list
>>> of the slowest queries, together with benchmarks.
>>>
>>> I hope this helps... let me know if you have ideas on how to improve
>>> performance tuning.
>>>
>>> Regards,
>>> Holger
>>>
>>> --
>>>
>>> You received this message because you are subscribed to the Google
>>> Groups "TopBraid Composer Users" group.
>>> To post to this group, send email to [email protected].
>>> To unsubscribe from this group, send email to
>>> [email protected].
>>> For more options, visit this group at
>>> http://groups.google.com/group/topbraid-composer-users?hl=en.
>>>
>>>
>>
>