I've been on holiday, so I haven't responded on this topic, but now that I
am home and can type on a "real" keyboard, I have read back over this
chain and the previous topic and I am still confused.
I would think that the optimizer would
1) convert the path statement to the equivalent
It does - that's the "as reported" form.
The better
where {
  ?type rdfs:subClassOf owl:Thing .
  ?X rdfs:subClassOf? ?type .
  ?obj a ?X .
}
is the expansion, but the other way round (and one which would be worse in
other cases).
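For context, the property-path form that expands to the block above would be
something like this (my reconstruction; the exact query from the earlier
thread may differ):

  where {
    ?type rdfs:subClassOf owl:Thing .
    ?obj a/rdfs:subClassOf? ?type .
  }

Here the path "a/rdfs:subClassOf?" is what introduces the intermediate
variable (?X above) when expanded.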
You can try all
Would it not make sense for the optimizer to convert to a form with the
intermediate variable?
Claude
On Mon, Jun 4, 2018, 2:37 PM Andy Seaborne wrote:
> Claude,
>
> "qparse --print=opt"
>
> Because by expansion of "/" you get a cross product BGP and the path
> blocks the optimizer (it does
Andy,
Why does the non-path WHERE clause work better?
Claude
Hi Andy,
Thanks for your help.
>> I wanted to control what indexes are generated because I knew the
>> access pattern of my SPARQL queries and also wanted smaller DB size.
>> I think there are toggles to decide what indexes are generated but I
>> did not try to search much.
>
>
> What are the
Hi Siddhesh,
Thank you very much for the report. It is always useful to hear of
people's experiences.
On 30/05/18 15:43, Siddhesh Rane wrote:
For my undergraduate project I used Fuseki 3.6.0 server backed by a TDB dataset.
3 unique SPARQL queries were made against the server by 6 nodes
Thanks Bruno,
I haven't yet written a paper. I'm in the process of experimenting more.
Regards,
Siddhesh
On Thu, May 31, 2018 at 6:41 AM, Claude Warren wrote:
> Just a quick note: there is a Cassandra implementation, but no work has
> been done on performance tuning.
>
> On a second note, I
Just a quick note: there is a Cassandra implementation, but no work has
been done on performance tuning.
On a second note, I did some work using Bloom filters to do partitioning
that allows adding partitions on demand. It should work for triple store
partitioning as well.
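The general idea behind Bloom-filter partitioning can be sketched roughly as
follows (a minimal Python illustration of the technique, not Claude's actual
implementation; all class and parameter names here are invented). Each
partition keeps a Bloom filter of the triples it holds; lookups use the
filters to skip partitions that definitely do not contain the triple, and new
partitions are created on demand as existing ones fill up:

```python
import hashlib

class BloomFilter:
    """Simple Bloom filter backed by a Python integer as a bit set."""
    def __init__(self, size=1024, hashes=3):
        self.size = size
        self.hashes = hashes
        self.bits = 0

    def _positions(self, item):
        # Derive k independent bit positions from salted SHA-256 digests.
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def might_contain(self, item):
        # False means "definitely absent"; True means "possibly present".
        return all(self.bits >> p & 1 for p in self._positions(item))

class PartitionedStore:
    """Triples are routed to partitions; partitions are added on demand."""
    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.partitions = []  # list of (BloomFilter, list_of_triples)

    def add(self, triple):
        # Open a new partition when there is none or the last one is full.
        if not self.partitions or len(self.partitions[-1][1]) >= self.capacity:
            self.partitions.append((BloomFilter(), []))
        bf, items = self.partitions[-1]
        bf.add(triple)
        items.append(triple)

    def find(self, triple):
        # Bloom filters prune partitions that certainly do not hold the triple;
        # the exact membership check guards against false positives.
        for bf, items in self.partitions:
            if bf.might_contain(triple) and triple in items:
                return True
        return False
```

The on-demand growth is the point Claude raises: adding a partition only
requires creating a fresh (empty) Bloom filter, with no re-hashing of the
triples already stored.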
Claude
On Wed, May 30,
Thursday, 31 May 2018 2:43 AM
Subject: Jena Usage Report: Performance, bugs and feature requests
For my undergraduate project I used Fuseki 3.6.0 server backed by a TDB dataset.
3 unique SPARQL queries were made against the server by 6 nodes in
a Spark cluster returning a total of 150 million triples.
As I used DBpedia's dataset, nearly all the entities from Wikipedia
were covered, so my