Hello

I can't find the page right now, but Virtuoso, at least the open-source
version, does not have comprehensive inference support.

In my PhD thesis, I have 250MB of N3 files, about 10 million triples. Even
though that's not really big, I couldn't get inference done quickly within
reasonable memory limits (even with Pellet) until I simplified my inference
rules to fit in OWL Micro. Now I answer queries using inference in
5-10 seconds each. If it weren't for my need of rules like "A AND B
SubClassOf C", I could use RDFS inference, which is even faster.
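For reference, a rule like "A AND B SubClassOf C" corresponds to an OWL intersection axiom. A minimal sketch of how you would state it with Jena's ontology API (package names are from current org.apache.jena releases; the `NS` namespace and class names A, B, C are placeholders):

```java
import org.apache.jena.ontology.IntersectionClass;
import org.apache.jena.ontology.OntClass;
import org.apache.jena.ontology.OntModel;
import org.apache.jena.ontology.OntModelSpec;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.RDFNode;

public class IntersectionAxiom {
    public static void main(String[] args) {
        String NS = "http://example.org/ont#"; // placeholder namespace
        OntModel m = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM);
        OntClass a = m.createClass(NS + "A");
        OntClass b = m.createClass(NS + "B");
        OntClass c = m.createClass(NS + "C");

        // (A AND B) SubClassOf C: an anonymous intersection class
        // declared as a subclass of C
        IntersectionClass ab = m.createIntersectionClass(null,
                m.createList(new RDFNode[] { a, b }));
        ab.addSuperClass(c);

        m.write(System.out, "TURTLE");
    }
}
```

It's exactly this kind of axiom that pushes you past what RDFS-level inference can handle.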

My advice would be to create a simpler ontology so that your rules
fit in OWL Micro.
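As a sketch, binding the OWL Micro reasoner instead of the full one looks like this (package names are from current org.apache.jena releases; `schema` stands in for your 50MB ontology model, which you would load with RDFDataMgr in practice):

```java
import org.apache.jena.rdf.model.InfModel;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.reasoner.Reasoner;
import org.apache.jena.reasoner.ReasonerRegistry;

public class OwlMicroBinding {
    public static void main(String[] args) {
        // Placeholder for your ontology; load the real schema from file.
        Model schema = ModelFactory.createDefaultModel();

        // OWL Micro instead of ReasonerRegistry.getOWLReasoner()
        Reasoner reasoner = ReasonerRegistry.getOWLMicroReasoner();
        reasoner = reasoner.bindSchema(schema);

        // Attach instance data (kept in memory, not TDB) and query the InfModel
        Model data = ModelFactory.createDefaultModel();
        InfModel inf = ModelFactory.createInfModel(reasoner, data);
        System.out.println("Inference model ready: " + !inf.isClosed());
    }
}
```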

Good luck!



--
diogo patrão




On Thu, Aug 1, 2013 at 8:53 AM, Bahador(reza)? OFOGHI
<[email protected]> wrote:

> Thanks, I have been looking at Virtuoso as an alternative since most of my
> instances are now in TDB. But thanks for the info anyway. If I cannot find
> a clean solution with Jena+Virtuoso, then I might use Pellet.
>
>
>
>
> ________________________________
>  From: Dave Reynolds <[email protected]>
> To: [email protected]
> Sent: Thursday, 1 August 2013 8:22 PM
> Subject: Re: OWL reasoning with Jena
>
>
> On 01/08/13 11:06, Bahador(reza)? OFOGHI wrote:
> > Hi,
> >
> > I have a relatively large OWL file (~50MB) I have created as
> OntModel.OWL_MEM. Today I tried to bind it with an OWL reasoner using the
> following code:
> >
> > Reasoner reasoner = ReasonerRegistry.getOWLReasoner();
> > reasoner = reasoner.bindSchema(my50MBModel);
> >
> > And it took forever for the binding process and it did not even return
> before I got disappointed and stopped the process.
> >
> > I wonder if I am missing anything here? Is there any faster way of
> defining the model as an inference model? What should I expect as average
> query time on this model then?
> >
>
> Reasoning time is totally dependent on the nature of your data. A small
> ontology can be extremely expensive to reason over.
>
> As a default for the rule based reasoners use the OWL Micro
> configuration. It does most of what the fuller configurations do and is
> a lot cheaper.
>
> I trust you are running in-memory and not trying to reason over a TDB or
> SDB model.
>
> If you need complete DL reasoning or if OWL Micro is too slow then use
> Pellet or a commercial solution.
>
> Dave
>
