Hello,

> Jena's inference is purely in memory so running over a TDB store is
> possible but doesn't give you any scalability and is slower than
> running over an in-memory copy of the same data. Plus, as you already
> know, it's not named-graphs-aware.

Thank you for your clarifying answer! I really think this should be made
clear somewhere in the documentation, as it would have saved us a few
days of tests trying to understand why Fuseki didn't behave as we
expected... Also, maybe Fuseki could emit a few warning or error
messages when it reads a configuration that combines inference with TDB
or unionDefaultGraph? Right now it just silently misbehaves, which is
quite painful for users...

> Assuming modest data sizes and static data then you could either:

Well, it's fairly large (a few million triples) and very dynamic... but
we ended up with the following solution; I explain it here in case it
can help people with the same needs:

In the code that transfers data to Fuseki, we run a reasoner and add the
inferred triples to the corresponding named graph, then transfer all the
triples (our data plus the inferred ones) to Fuseki, which stores them
in TDB. This is doable for us because we only need limited
forward-chaining inference that doesn't cross named graphs, but it would
certainly not work for heavier inference such as an OWL Full reasoner
across the union graph...
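For anyone wanting to try the same approach, the per-graph materialization step can be sketched with Jena's API. This is only an illustration, not our actual code: the toy data, class names, and the commented-out Fuseki endpoint are all hypothetical, and I use the built-in RDFS reasoner where your rules may differ.

```java
import org.apache.jena.rdf.model.InfModel;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Resource;
import org.apache.jena.vocabulary.RDF;
import org.apache.jena.vocabulary.RDFS;

public class MaterializeGraph {

    // Run an in-memory reasoner over one named graph's data and return a
    // plain model containing the base triples plus the inferred ones.
    static Model materialize(Model data) {
        // RDFS forward inference over this graph only; it never sees
        // triples from other named graphs, matching our constraint.
        InfModel inf = ModelFactory.createRDFSModel(data);
        // Copy base + entailed statements into an ordinary model so it
        // can be uploaded as-is.
        return ModelFactory.createDefaultModel().add(inf);
    }

    public static void main(String[] args) {
        // Toy data for one named graph:
        //   ex:Cat rdfs:subClassOf ex:Animal .  ex:felix a ex:Cat .
        String ns = "http://example.org/";
        Model data = ModelFactory.createDefaultModel();
        Resource cat = data.createResource(ns + "Cat");
        Resource animal = data.createResource(ns + "Animal");
        Resource felix = data.createResource(ns + "felix");
        data.add(cat, RDFS.subClassOf, animal);
        data.add(felix, RDF.type, cat);

        Model materialized = materialize(data);

        // The materialized model now also holds ex:felix a ex:Animal .
        System.out.println(materialized.contains(felix, RDF.type, animal));

        // Then push the whole model into the named graph on Fuseki, e.g.
        // via RDFConnection (endpoint and graph name are placeholders):
        // try (RDFConnection conn =
        //         RDFConnection.connect("http://localhost:3030/ds")) {
        //     conn.put(ns + "graph1", materialized);
        // }
    }
}
```

Because the reasoning happens client-side before upload, Fuseki itself stays a plain TDB store with no inference configured, which is exactly what it handles well.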

Thank you,
-- 
Elie
