Hi Marc,

Fundamentally, inference is slow, and the rule-system approach is OK for simple inference (RDFS plus a bit of simple custom rules) but not great for OWL. If you need performant OWL inference then use a DL reasoner such as Pellet.

On your test I see 19s with the full OWL rule set and 190ms with OWLMicro, which is the reasoner profile that's generally closest to being practical. [OWLMini is even slower than the full OWL rule set in this case.]
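For reference, the rule profile is selected via the OntModelSpec passed when creating the ontology model. A minimal sketch of the three profiles mentioned above (package names assume Jena 3.x; the 19s/190ms figures come from Marc's test, not from this snippet):

```java
import org.apache.jena.ontology.OntModel;
import org.apache.jena.ontology.OntModelSpec;
import org.apache.jena.rdf.model.ModelFactory;

public class ReasonerProfiles {
    public static void main(String[] args) {
        // Full OWL rule reasoner: the most complete rule set, and the slowest.
        OntModel full = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM_RULE_INF);

        // OWLMicro: a reduced rule set (RDFS plus some OWL constructs),
        // generally the closest to practical for rule-based OWL inference.
        OntModel micro = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM_MICRO_RULE_INF);

        // OWLMini sits between the two in coverage, but on some ontologies
        // it is even slower than the full rule set.
        OntModel mini = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM_MINI_RULE_INF);

        System.out.println("profiles created: "
                + (full != null && micro != null && mini != null));
    }
}
```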

Note that that ontology is OWL 2, which Jena doesn't support.

Dave

On 14/03/18 20:34, Marc Agate wrote:
Hi,

owlURL is the URL of the bdrc.owl file that is used to create the Ontology model.

public static final String owlURL="https://raw.githubusercontent.com/BuddhistDigitalResourceCenter/owl-schema/master/bdrc.owl";

Marc

Le mercredi 14 mars 2018 à 20:25 +0000, Andy Seaborne a écrit :

On 14/03/18 19:12, Élie Roux wrote:
In the case of inference then yes there is also an upfront cost of computing the inferences. Once computed these are typically cached (though this depends on the rule set) and any changes to the data might invalidate that cache. You can call prepare() on the InfModel to incur the initial computation cost separately; otherwise the initial computation cost is incurred by whatever operation first accesses the InfModel. And as your email shows, subsequent calls don't incur that cost and are much faster.
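The prepare() point can be sketched like so, on a tiny hypothetical RDFS example rather than the bdrc.owl model (package names assume Jena 3.x):

```java
import org.apache.jena.rdf.model.InfModel;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Resource;
import org.apache.jena.reasoner.ReasonerRegistry;
import org.apache.jena.vocabulary.RDF;
import org.apache.jena.vocabulary.RDFS;

public class PrepareDemo {
    public static void main(String[] args) {
        // Tiny data model: :a rdf:type :B, and :B rdfs:subClassOf :C.
        Model data = ModelFactory.createDefaultModel();
        Resource b = data.createResource("http://example.org/B");
        Resource c = data.createResource("http://example.org/C");
        Resource a = data.createResource("http://example.org/a");
        data.add(b, RDFS.subClassOf, c);
        data.add(a, RDF.type, b);

        InfModel inf = ModelFactory.createInfModel(
                ReasonerRegistry.getRDFSReasoner(), data);

        // Incur the up-front inference cost explicitly...
        long t0 = System.nanoTime();
        inf.prepare();
        long prepMs = (System.nanoTime() - t0) / 1_000_000;

        // ...so the first query only pays the (cached) lookup cost.
        long t1 = System.nanoTime();
        boolean inferred = inf.contains(a, RDF.type, c); // true via rdfs:subClassOf
        long queryMs = (System.nanoTime() - t1) / 1_000_000;

        System.out.println("inferred=" + inferred
                + " prepare=" + prepMs + "ms query=" + queryMs + "ms");
    }
}
```

Note that adding statements to the underlying data after prepare() may invalidate the cached deductions, depending on the rule set.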

I don't disagree, but I think there's a problem of scale here: even with a cold JVM and a not-too-efficient reasoner, it seems totally unreasonable that a reasoner would take 60 full seconds (that's what Marc's test is taking on my machine) to run inference on a very small dataset already loaded in memory... 60s for such a small operation really seems to indicate a bug to me. But maybe it doesn't...


What's owlURL?

The second time, the cost does not include running the forward inference rules: 9ms.

(Actually, if you see 60s and Marc sees 18s, something else is going on as well.)


Thank you,
