Hi Rob,

Just a quick question before sending all of you the ontology and
examples. I have snipped part of the email.

On 2017-10-11 15:47, Rob Vesse wrote:
> Comments inline:
> 
> On 11/10/2017 11:57, "George News" <[email protected]> wrote:
> 
> 
> [...snipped...]
> 
> - The depth of the graph, or of the information relationships, is
> around 7-8 levels at most, but most of the time only 3-4 levels need
> to be linked.
> 
> Difficult to say how this impacts performance, because it really
> depends on how you are querying that structure.
> 
> - Most of the queries include several patterns like: ?x
> myont:hasattribute ?b. ?a rdf:type ?b.
> 
> Therefore they check the classes and subclasses of entities. Is there
> any way to speed up the inference, so that when I ask for the parent
> class I also get the child classes defined in my ontology?
> 
> So are you actively using inference? If you are, then that will
> significantly degrade performance, because the inference closure is
> computed entirely in memory (i.e. not in TDB) when inference is turned
> on, and you will get minimal performance benefit from using TDB.
> 
> If you only need simple inference, like the class and property
> hierarchies, you may be better served by asserting those statically
> using SPARQL updates rather than using dynamic inference.
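
(Just to check I understand "asserting those statically": I read it as a
one-off SPARQL update that materialises the implied rdf:type triples,
roughly like the sketch below. The class name and update string are my
own illustration, so please correct me if I got the idea wrong.)

import org.apache.jena.rdf.model.Model;
import org.apache.jena.update.UpdateAction;

public class MaterialiseHierarchy {
    // One-off materialisation of the class hierarchy: every instance of a
    // class is also asserted as an instance of all its superclasses, so
    // rdf:type queries no longer need dynamic inference.
    public static void materialise(Model model) {
        String update =
            "PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> " +
            "PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#> " +
            "INSERT { ?inst rdf:type ?super } " +
            "WHERE  { ?inst rdf:type ?cls . ?cls rdfs:subClassOf+ ?super }";
        UpdateAction.parseExecute(update, model); // parse and run the update
    }
}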

I would like to understand better what it means that inference is done
in memory and how this is achieved.

My initial idea was to store the full ontology model in TDB together
with the data, in the same named graph. This way, the model retrieved
from the dataset would include all the information and enable inference
in the SPARQL engine whenever possible/required.

Based on your comment, is the whole model (ontology + data) loaded into
memory? Would there be any difference between my approach and creating a
union of the data model and the ontology and using that as the input for
the QueryExecution?

Below you can find pseudocode for both options:

1) Initial approach
- Create dataset
- Load ontology model (modelOntology)
- Create named graph using modelOntology as the initial data
- Add new semantic entities to the named graph

- Retrieve the named graph (m = getNamedGraph)
- qExec = QueryExecutionFactory.create(query, m);
- Read results
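
In Java, option 1 would look roughly like this (paths, graph URI and the
query are placeholders I made up):

import org.apache.jena.query.*;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.riot.RDFDataMgr;
import org.apache.jena.tdb.TDBFactory;

public class Option1 {
    public static void main(String[] args) {
        Dataset dataset = TDBFactory.createDataset("/tmp/tdb-option1");

        // Seed the named graph with the ontology; new semantic entities
        // are added to this same graph later
        dataset.begin(ReadWrite.WRITE);
        try {
            Model m = dataset.getNamedModel("urn:example:data");
            RDFDataMgr.read(m, "ontology.ttl");
            dataset.commit();
        } finally {
            dataset.end();
        }

        // Query the single graph holding ontology + data together
        dataset.begin(ReadWrite.READ);
        try {
            Model m = dataset.getNamedModel("urn:example:data");
            String query = "SELECT * WHERE { ?s ?p ?o } LIMIT 10";
            try (QueryExecution qExec = QueryExecutionFactory.create(query, m)) {
                ResultSetFormatter.out(qExec.execSelect());
            }
        } finally {
            dataset.end();
        }
    }
}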

2) Alternative approach, enabling inference only when desired
- Create dataset
- Load ontology model into a static variable (staticModelOntology)
- Create empty named graph
- Add new semantic entities to the named graph

- Retrieve the named graph (namedGraph = getNamedGraph)
- m = ModelFactory.createUnion(namedGraph, staticModelOntology)
- qExec = QueryExecutionFactory.create(query, m);
- Read results
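
And option 2, roughly (same placeholder names as above):

import org.apache.jena.query.*;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.riot.RDFDataMgr;
import org.apache.jena.tdb.TDBFactory;

public class Option2 {
    // Ontology loaded once into an in-memory model and shared
    static final Model staticModelOntology = RDFDataMgr.loadModel("ontology.ttl");

    public static void main(String[] args) {
        Dataset dataset = TDBFactory.createDataset("/tmp/tdb-option2");

        dataset.begin(ReadWrite.READ);
        try {
            // The named graph holds only data; the ontology stays in memory
            Model namedGraph = dataset.getNamedModel("urn:example:data");
            // Dynamic union: nothing is copied, lookups consult both models
            Model m = ModelFactory.createUnion(namedGraph, staticModelOntology);
            String query = "SELECT * WHERE { ?s ?p ?o } LIMIT 10";
            try (QueryExecution qExec = QueryExecutionFactory.create(query, m)) {
                ResultSetFormatter.out(qExec.execSelect());
            }
        } finally {
            dataset.end();
        }
    }
}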


What are the real differences between option 1 and option 2? I was also
thinking of extending option 2 by storing the ontology model in TDB as
another named graph. Would that increase performance?
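
That extension would look roughly like this, with both graphs TDB-backed
(URIs again made up):

import org.apache.jena.query.*;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.tdb.TDBFactory;

public class Option2TdbOntology {
    public static void main(String[] args) {
        Dataset dataset = TDBFactory.createDataset("/tmp/tdb-variant");
        dataset.begin(ReadWrite.READ);
        try {
            // Ontology and data each live in their own TDB named graph
            Model ontology = dataset.getNamedModel("urn:example:ontology");
            Model data = dataset.getNamedModel("urn:example:data");
            Model m = ModelFactory.createUnion(data, ontology);
            String query = "SELECT * WHERE { ?s ?p ?o } LIMIT 10";
            try (QueryExecution qExec = QueryExecutionFactory.create(query, m)) {
                ResultSetFormatter.out(qExec.execSelect());
            }
        } finally {
            dataset.end();
        }
    }
}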

Thanks a lot.
Jorge
