Hi all,

I have spent the last few days working heavily on a more extensive mapping from SPARQL to SQL, so most SPARQL constructs should now be supported natively. This can yield dramatic speed-ups compared to the previous implementation or the Sesame in-memory implementation — in my experiments up to a factor of 1000 with PostgreSQL — because the database can do an amazing job of query planning when it knows the full query.
However, translating SPARQL into SQL is a very complex task (probably the most complex piece of code in Marmotta), because the two languages have very different semantics even though they share some syntactic constructs. There will therefore be errors that are not yet covered by our unit tests, so I'd like to ask you to play around with the new implementation. Please use the PostgreSQL backend, as it is the one best covered. If you encounter a SPARQL query that works in other triple stores but gives wrong results or even errors in KiWi, please report it to the Marmotta issue tracker (or by mail if you must).

Note that there are certain border cases that I deliberately did not implement according to the standard, so the results produced by KiWi differ from the official SPARQL semantics. The reason is that these are, IMHO, non-intuitive special situations where the SPARQL standard has not been designed very well, and covering them in SQL would require dirty tricks and significantly more complex queries just to handle cases that almost never occur. Specifically, the implementation has different semantics regarding:

- ordering of OPTIONAL constructs (see https://openrdf.atlassian.net/browse/SES-1898)
- scoping of variables in inner FILTERs in MINUS (see http://www.w3.org/TR/sparql11-query/#neg-notexists-minus)

If you want fully compliant semantics, don't use the kiwi-sparql extension ;-)

Greetings,
Sebastian
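P.S.: For those wondering what the MINUS scoping case looks like, here is a minimal sketch adapted from the example in the linked section of the SPARQL 1.1 spec (the :p and :q predicates are just placeholders):

```sparql
PREFIX : <http://example/>

SELECT * WHERE {
  ?x :p ?n
  MINUS {
    ?x :q ?m .
    # Per the standard, ?n is out of scope here: it is not bound
    # inside the MINUS group, so the comparison evaluates with an
    # unbound ?n (an error, i.e. effectively false) and the MINUS
    # removes nothing. The "intuitive" reading — presumably what a
    # direct SQL translation produces — correlates ?n with the
    # binding from the outer pattern instead.
    FILTER(?n = ?m)
  }
}
```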
