Update: SPARQL tuple (SELECT) queries are now supported natively
(experimental). All other kinds of SPARQL queries still use the Sesame
in-memory evaluation.

2015-12-12 17:22 GMT+01:00 Sebastian Schaffert <[email protected]>:

> Hi all,
>
> I've been working on it for a long time outside the main Marmotta tree,
> but even though it is still experimental, it is now mature enough to be
> included in the development repository of Marmotta: a new triple store
> backend implemented in C++ and using LevelDB (http://www.leveldb.org). In
> analogy to KiWi, I named it Ostrich - another bird without wings, but one
> that runs very fast :)
>
> The Ostrich backend is ultra-fast compared to KiWi (I can import 500k
> triples in 7 seconds), but it does not provide the same feature set. In
> particular, the following restrictions apply:
> - limited transaction support; a transaction stays active only while you
> are executing updates, but as soon as you run a query on a connection it
> will auto-commit (see the Java sketch after this list)
> - currently emulated in-memory SPARQL (I started working on direct C++
> SPARQL support, but it is not yet available from Java; performance is
> promising, though, so more to come :) )
> - currently emulated LDPath support (I might implement LDPath in C++ if
> the emulated performance is not good enough)
> - currently no reasoner (it's certainly possible, but a lot of work)
> - currently no versioning or snapshotting (might be possible at the LevelDB
> level, but I haven't investigated much)
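>
> To illustrate the transaction caveat above, here is a minimal Java sketch
> against the Sesame Repository API. The OstrichRepository class, its
> constructor arguments and the port are hypothetical placeholders, not the
> actual names used by the Ostrich client:
>
>     import org.openrdf.model.ValueFactory;
>     import org.openrdf.query.QueryLanguage;
>     import org.openrdf.query.TupleQueryResult;
>     import org.openrdf.repository.RepositoryConnection;
>
>     public class OstrichAutoCommitDemo {
>         public static void main(String[] args) throws Exception {
>             // hypothetical repository wrapper talking to a running Ostrich backend
>             OstrichRepository repository = new OstrichRepository("localhost", 10000);
>             repository.initialize();
>
>             RepositoryConnection con = repository.getConnection();
>             ValueFactory vf = repository.getValueFactory();
>             try {
>                 con.begin();                                  // transaction starts
>                 con.add(vf.createURI("http://example.org/s"), // updates stay inside
>                         vf.createURI("http://example.org/p"), // the open transaction
>                         vf.createLiteral("o"));
>                 // running a query on this connection auto-commits the
>                 // pending transaction:
>                 TupleQueryResult result = con.prepareTupleQuery(QueryLanguage.SPARQL,
>                         "SELECT * WHERE { ?s ?p ?o } LIMIT 10").evaluate();
>                 result.close();
>             } finally {
>                 con.close();
>             }
>         }
>     }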
>
> The new backend consists of a C++ part (server) and a Java part (client).
> Client and server communicate with each other using Proto messages and gRPC
> (latest snapshot version!). The data model and service definitions are in
> the .proto files found in libraries/ostrich/backend.
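>
> As a rough illustration of the wire setup, the sketch below opens a gRPC
> channel from Java; the SparqlServiceGrpc stub class and the port are
> hypothetical stand-ins for whatever is generated from those .proto files,
> and the builder calls follow current grpc-java, which may differ slightly
> from the snapshot version used here:
>
>     import io.grpc.ManagedChannel;
>     import io.grpc.ManagedChannelBuilder;
>
>     public class OstrichChannelDemo {
>         public static void main(String[] args) {
>             // plaintext channel to a running Ostrich backend (host and port assumed)
>             ManagedChannel channel = ManagedChannelBuilder
>                     .forAddress("localhost", 10000)
>                     .usePlaintext()
>                     .build();
>
>             // hypothetical blocking stub generated from the service definitions
>             SparqlServiceGrpc.SparqlServiceBlockingStub sparql =
>                     SparqlServiceGrpc.newBlockingStub(channel);
>
>             // ... issue proto request messages via the stub, then shut down
>             channel.shutdown();
>         }
>     }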
>
> If you want to try this out, please have a look at the README.md file
> located in libraries/ostrich/backend. Besides compiling the C++ code
> separately with cmake and make, you need to build Marmotta with
>
> mvn clean install -Postrich -DskipTests
>
> Note that the Java code contains tests, but they require a running
> backend, so for now it is better to skip the tests when building the Java
> part.
>
> Bulk imports are best done with the C++ command line client (see
> README.md).
>
> Have fun!
>
> Sebastian
>
>
>
> P.S. You can now also use the client to try out native SPARQL support:
>
> ./client/marmotta_client sparql 'select * where { ?s ?p ?o } limit 10'
>
> The result will be a mostly unreadable, text-formatted dump of the
> resulting proto messages :)
