Some things could be clearer (and I was in the audience for the paper)

1/ "Java heap size set 8GB" - TDB does not use the heap for file caching.

(This also handicaps Sesame for another reason - a 32 GB machine and they only gave Sesame an 8 GB heap.)

2/ Not clear if the machine was configured to use all memory for memory mapped files. It has been reported that some kernel configs are set to limit mmap usage.
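On Linux, for example, the usual knob is vm.max_map_count. A quick way to check and raise it (the 262144 here is purely an illustrative value, not a recommendation):

```shell
# Check the current per-process limit on memory-mapped regions
sysctl vm.max_map_count

# Raise it if it is too low for TDB's memory-mapped index files
# (262144 is an illustrative value, not a recommendation)
sudo sysctl -w vm.max_map_count=262144
```

If the limit is left at a small default, mapping the many index files of a large TDB database can fail or fall back to slower paths.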

3/ TDB stats and DBpedia data are an "interesting" combination. If they just used the default settings, I'd guess the stats (predicate frequencies) are likely to be wild.
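The stats can be regenerated with the tdbstats tool from the TDB distribution and placed where the optimizer picks them up (the database path here is hypothetical):

```shell
# Recompute predicate frequency statistics for a TDB database
# and install them where the optimizer looks for them.
# /data/dbpedia-tdb is a hypothetical database location.
tdbstats --loc /data/dbpedia-tdb > /data/dbpedia-tdb/stats.opt
```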

My personal feeling is that 1+2 are having an effect. It would be interesting to run in direct mode with the in-JVM caches turned up to use 30 GB or so.
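A sketch of what I mean, assuming the tdb:fileMode system property is available to force direct mode (BenchmarkRunner is a placeholder for whatever harness drives the queries):

```shell
# Run TDB in direct mode (in-JVM block caches instead of mmap)
# with a large heap so those caches have room to grow.
# BenchmarkRunner is a placeholder, not a real class name.
java -Xmx30g -Dtdb:fileMode=direct -cp tdb.jar:... BenchmarkRunner
```

That would separate "mmap was constrained" from "TDB is slow" in the results.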

But the real unclarity is the query set - they were asked about this at ISWC and did not give a clear answer. It is gathered from DBpedia.org - but that service has a timeout on queries which people often run into. This (a) skews the queries people ask anyway, and (b) may skew the queries collected, as they did not say whether they collected timed-out queries or successful ones.

        Andy



On 21/11/11 10:47, Paolo Castagna wrote:
Thorsten Möller wrote:
Hi

I guess some of you have noticed the recent paper on comparing performance of 
prominent triple stores [1].

Yep [1].

Do you consider it a fair benchmark (set up, data set, queries), considering 
the fact that TDB's performance isn't that good according to the results. Any 
objections, comments on the paper?

Adding DBPSB to the list of JenaPerf benchmarks would probably be a useful 
contribution:
https://svn.apache.org/repos/asf/incubator/jena/Experimental/JenaPerf/trunk/Benchmarks/
I've had no time to run DBPSB myself or look in detail at each of the DBPSB 
queries, unfortunately. But this is IMHO something useful to do.

Benchmarks are useful to compare different systems.
However, the best benchmark is the one you build yourself, using your own data 
and the queries you or your users are most likely to write.
The aim of JenaPerf, as I see it, is to make it as easy as possible to add new 
datasets and queries and run your own benchmark.

Personally, I tend to classify SPARQL queries in two broad categories: 
interactive and analytic.
For interactive queries, I'd like to have sub-second response times. If that is 
not always possible, I desperately want a cache layer in front.
For analytic queries, a difference of a few seconds or minutes (or hours!) does 
not really matter. I just want the answer (in a reasonable amount of time).

IMHO SPARQL (like SQL) is an unconstrained language in terms of the complexity 
of queries people can run... this makes a lot of things more "interesting" and 
challenging. (Compare with the heavily constrained query languages of other 
NoSQL systems: MQL [2], CQL [3], ...)

Last but not least, IMHO benchmarks are very useful tools in showing people and 
developers that there is room for improvement.
Performance and scalability are very important and I welcome any progress TDB 
and ARQ make on these aspects.
But I also value: being an open source project with an active (and hopefully 
growing) community of users and contributors around it (including you! :-)), ease 
of installation, extensibility, compliance
with W3C recommendations, ease of integration with other (Java) projects, 
quality and speed of support (via mailing list or commercial alternatives), etc.

Do you want to help adding DBPSB to JenaPerf? :-)

Paolo

  [1] http://markmail.org/message/n32zifhgl52kilkz
  [2] http://wiki.freebase.com/wiki/MQL
  [3] https://www.google.com/#q="Cassandra+Query+Language";



Thorsten

[1] 
http://iswc2011.semanticweb.org/fileadmin/iswc/Papers/Research_Paper/03/70310448.pdf


