Hi Rajeev,

I will have to check with development about the internal implementation. In the meantime, you should be able to use the Virtuoso explain() function, via isql or the Conductor interactive SQL interface, to see how the query is compiled internally for execution and whether any optimizations are applied. The syntax for the explain function is:

        explain('sparql <your sparql query>')
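For example, against a running Virtuoso instance you could compare the compiled plans for your two patterns from an isql session. The graph IRI and rule-set name ('inft') below are placeholders for your own:

        SQL> explain('sparql SELECT ?s WHERE { ?s rdf:type <http://example.org/SomeClass> }');
        SQL> explain('sparql DEFINE input:inference "inft" SELECT ?s WHERE { ?s rdf:type <http://example.org/SomeClass> }');

If the two plans are identical, the inference option has effectively been compiled away for that pattern.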

Best Regards
Hugh Williams
Professional Services
OpenLink Software

On 13 Jun 2008, at 15:23, Rajeev J Sebastian wrote:

On Fri, Jun 13, 2008 at 4:31 PM, Hugh Williams <hwilli...@openlinksw.com> wrote:
Hi Rajeev,

The inference costs are directly related to the complexity of the inference rules. For subclass and subproperty inferencing we haven't seen anything significant, based on the Yago rules applied to DBpedia recently. Is there a specific
case/instance in which you are encountering a performance loss using
inferencing? For instance, we inference on owl:sameAs, which when enabled means two identical URIs with similar or dissimilar
data graphs are smushed/meshed/combined. E.g. imagine you had two blogs, an old and a new one: if
owl:sameAs is declared via a rule, and owl:sameAs inferencing is enabled via a SPARQL pragma, you will end up with blog posts from both blog systems via the URI of either. In this case the data retrieved is larger, at an expected cost, but this isn't due to the actual act of "inferencing", rather
the results of inferencing.

Thanks Hugh.

What I meant was suppose I had two patterns like

?s rdf:type some:Class option (inference 'inft')

and

?s rdf:type some:Class

and assuming that the result set of both patterns is exactly the same
set of triples, i.e., some:Class doesn't have any subclasses.

In this case, performance-wise, would "?s rdf:type some:Class option
(inference 'inft')" degenerate into "?s rdf:type some:Class" through
some kind of optimization?

Regards
Rajeev J Sebastian
