Thanks for the suggestions, just a couple comments below...

On Dec 18, 2009, at 12:24 PM, Holger Knublauch wrote:

>> Anyway, these rules take quite a long time (about 90 seconds) to  
>> execute on my OWL_MEM model, which isn't all that big at about 3000  
>> triples including all the imported models.  Does that sound  
>> plausible?  If so, I'm thinking I may need to add these inferences  
>> in a more targeted way.
>
> In general, as you certainly know, the performance of SPARQL queries  
> strongly depends on the implementation of the engine and whether (or  
> not) it re-orders clauses automatically. It is best to assume that  
> ARQ will execute the clauses (triple matches and filters) from top  
> to bottom, but looking at the SPARQL Debugger in TBC will help you  
> identify the real execution order. In particular, FILTER clauses may
> be re-ordered with a significant performance penalty. There is a new
> option in TBC on the SPARQL preferences tab to force ARQ to leave  
> the FILTER clauses in place, which is IMHO recommended.
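
Just so I'm sure I follow the FILTER point, here is the kind of rewrite I understand you to mean. The namespace and property names below are made up for illustration (they're not from my model), and this is plain Jena/ARQ from Java rather than anything TBC-specific:

import com.hp.hpl.jena.query.QueryExecution;
import com.hp.hpl.jena.query.QueryExecutionFactory;
import com.hp.hpl.jena.query.ResultSet;
import com.hp.hpl.jena.rdf.model.Model;

public class FilterPlacementSketch {

    // Keep the FILTER immediately after the triple pattern that binds ?size,
    // so that (assuming ARQ really does run clauses top to bottom) candidate
    // bindings are discarded before the remaining patterns are joined.
    static final String QUERY =
        "PREFIX ex: <http://example.org/ns#>\n" +
        "SELECT ?item ?label\n" +
        "WHERE {\n" +
        "    ?item a ex:Widget .\n" +
        "    ?item ex:size ?size .\n" +
        "    FILTER (?size > 10)\n" +
        "    ?item ex:label ?label .\n" +
        "}";

    public static ResultSet run(Model model) {
        QueryExecution qexec = QueryExecutionFactory.create(QUERY, model);
        return qexec.execSelect();
    }
}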
>
> In SPIN in particular, other factors are important:
> - if a class has many subclasses with instances, then there might be  
> many individual SPARQL calls for each rule defined on the  
> superclasses.
There are some class/subclass hierarchies, but currently very little
instance data (that will change).  Looking at the inferences produced,
most of them are from the SPIN models themselves, e.g.:

[http://spinrdf.org/spin#_arg5, http://www.w3.org/1999/02/22-rdf-syntax-ns#type, http://spinrdf.org/sp#SystemClass]

Is there any way to suppress these?  I'm thinking most people running
inference rules on their own models aren't going to need these
inferences.  I suppose I could remove the SPIN model triples from my
OWL_MEM model before running SPIN inferences (or maybe remove the
imports from the import closure of the base model before creating the
OWL_MEM model?).  A simple flag on SPINInferences.run would be nice,
though, if you could figure out an efficient way to suppress them.
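
In case it's useful to see what I mean, here is roughly the workaround I have in mind on my side: just post-filtering the inferred model and throwing away anything asserted about resources in the spinrdf.org namespaces. This is plain Jena; the class and method names are mine, not part of the SPIN API:

import java.util.ArrayList;
import java.util.List;

import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.Statement;
import com.hp.hpl.jena.rdf.model.StmtIterator;

public class SpinInferenceCleanup {

    // Drop inferred statements whose subject is a spin:/sp:/spl: system
    // resource, i.e. inferences about the SPIN vocabulary itself rather
    // than about the domain model.
    public static void dropSystemInferences(Model inferred) {
        List<Statement> doomed = new ArrayList<Statement>();
        StmtIterator it = inferred.listStatements();
        while (it.hasNext()) {
            Statement s = it.nextStatement();
            if (s.getSubject().isURIResource()
                    && s.getSubject().getURI().startsWith("http://spinrdf.org/")) {
                doomed.add(s);
            }
        }
        inferred.remove(doomed.toArray(new Statement[doomed.size()]));
    }
}

Of course that still wastes the time spent computing those triples in the first place, which is why a flag (or pruning the SPIN imports before building the OntModel) would be nicer.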


> - Each iteration will have a clause to bind ?this with all instances  
> of a class.
>
> If these cause performance issues, you can bypass the binding of
> ?this by either
> - making rules global (drop ?this and instead put them at owl:Thing
> or rdfs:Resource). These global rules will be executed only once.
Yes, that is where I put my subclass and inverse rules.  However,
this causes the above-mentioned "extraneous" inferences to be produced.
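
For what it's worth, this is roughly the shape of those rules before and after making them global. The ex: namespace and property names are placeholders, and I'm only showing the rule bodies, not the spin:rule attachment:

public class GlobalRuleSketch {

    // Class-attached version: SPIN binds ?this to each instance of the class
    // the rule is attached to, so it is evaluated per class (and subclass).
    static final String CLASS_ATTACHED_RULE =
        "PREFIX ex: <http://example.org/ns#>\n" +
        "CONSTRUCT { ?parent ex:parentOf ?this . }\n" +
        "WHERE { ?this ex:childOf ?parent . }";

    // Global version attached at owl:Thing (or rdfs:Resource): ?this is
    // dropped entirely, so the rule runs only once over the whole model.
    static final String GLOBAL_RULE =
        "PREFIX ex: <http://example.org/ns#>\n" +
        "CONSTRUCT { ?b ex:parentOf ?a . }\n" +
        "WHERE { ?a ex:childOf ?b . }";
}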

> - use the new SPIN 1.1 property spin:thisUnbound to drop the ?this
> rdf:type <class> clause, which may theoretically slow things down.
I'll have to give that a try.
>
> After you run inferences in TBC, the Error Log will contain a list  
> of the slowest queries, together with benchmarks.
I'm not using TBC to run SPARQL right now, since there's no interface
to SDB named models.  Any progress on that?


>
> I hope this helps... let me know if you have ideas on how to improve  
> performance tuning.
Right now, just the one about an option to suppress inferences on the
SPIN models.

Thanks again!
Jeff

>
> Regards,
> Holger
>
