It depends entirely on the reasoning that you want to apply. OWL 2 DL is
not possible via simple rules, but, for instance, RDFS, OWL Horst and
OWL RL can be done via rule-based materialization.
> Let me go into more detail; thank you for responding.
>
> Of the 13 million property assertions, almost 80% are assertions of object 
> properties, i.e. relationships between individuals. In the last ontology I 
> generated automatically, covering only one of the municipalities in Cuba, I had 
> 27,763,887 object property assertions, 105,054 data property assertions and 
> 8,158 individuals.
>
> The inference I need is basically the following:
>
> 1) To know all the individuals that belong to a class directly and 
> indirectly, taking into consideration the equivalence between classes and 
> between individuals.
It depends on the reasoning profile and the ontology schema, but it might
be covered by SPARQL 1.1 as long as you only need RDFS/OWL RL reasoning.
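For example, a class-membership query under these assumptions might look like the following sketch (`:City` and the `:` namespace are hypothetical; the property path folds subclass and symmetric class-equivalence chains into the type lookup):

```sparql
PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX owl:  <http://www.w3.org/2002/07/owl#>
PREFIX :     <http://example.org/onto#>

# All individuals that are directly or indirectly instances of :City,
# following subclass links and (symmetric) class equivalence.
SELECT DISTINCT ?ind WHERE {
  ?ind rdf:type/(rdfs:subClassOf|owl:equivalentClass|^owl:equivalentClass)* :City .
}
```

Individual equivalence can be folded in the same way with an `(owl:sameAs|^owl:sameAs)*` step on `?ind`.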
>
> 2) Given an individual (Ind) and an object property (OP), know all 
> individuals related to "Ind", through OP. Considering the following 
> characteristics of OP: symmetry, functional, transitivity, inverse, 
> equivalence.
>
> 3) Search the direct and indirect subclasses of a class.
This works with SPARQL 1.1 property paths, as long as the classes are
atomic classes and not complex class expressions.
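A sketch of such a subclass query, using a hypothetical class `:Place`:

```sparql
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX :     <http://example.org/onto#>

# All direct and indirect (named) subclasses of :Place.
SELECT DISTINCT ?sub WHERE {
  ?sub rdfs:subClassOf+ :Place .
}
```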
>
> 4) Identify all classes equivalent to a class, considering that the 
> equivalence relation is transitive.
>
> 5) Identify the set of superclasses of a class.
Again SPARQL 1.1 property paths, as long as the classes are atomic
classes and not complex class expressions.
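The superclass case is the same path read in the other direction (again with a hypothetical class `:City`):

```sparql
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX :     <http://example.org/onto#>

# All direct and indirect (named) superclasses of :City.
SELECT DISTINCT ?super WHERE {
  :City rdfs:subClassOf+ ?super .
}
```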
>
> Could Jena and TDB handle that kind of inference on my big ontologies?
>
> Excuse me, but I am not deeply familiar with the SPARQL language. I have only 
> used it to access data that is explicit in the ontology, similar to SQL in 
> relational databases; I have never used it (nor do I know whether it is 
> possible to do so) to infer implicit knowledge.
The usual approach is either query rewriting w.r.t. the schema or
forward chaining, i.e. materialization based on a set of inference
rules. For RDFS, OWL Horst and OWL RL this is possible. Materialization
has to be done only once (as long as the dataset does not change).
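As an illustration, a single RDFS inference rule (rdfs9, type propagation along rdfs:subClassOf) can be expressed as a SPARQL CONSTRUCT query; a materialization engine effectively runs a set of such rules and adds the produced triples back to the store until nothing new is derived:

```sparql
PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

# rdfs9: if ?x is an instance of ?sub and ?sub is a subclass of ?super,
# then ?x is also an instance of ?super.
CONSTRUCT { ?x rdf:type ?super }
WHERE {
  ?x rdf:type ?sub .
  ?sub rdfs:subClassOf ?super .
}
```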
>
> I have put Ignazio Palmisano in copy, an excellent researcher who knows the 
> OWL API framework well, and with whom I have been exchanging on this subject.
>
> Best regards.
>
>
> ----- Original Message -----
> From: "Dave Reynolds" <dave.e.reyno...@gmail.com>
> To: users@jena.apache.org
> Sent: Sunday, 19 March 2017 13:45:48
> Subject: Re: [MASSMAIL]Re: about TDB JENA
>
> On 19/03/17 15:52, Manuel Enrique Puebla Martinez wrote:
>> I don't think I explained myself correctly in my previous 
>> email, so I repeat the two questions:
>>
>>
>> 1) I read the page https://jena.apache.org/documentation/tdb/assembler.html; 
>> I do not think it is what I need.
>>
>>    I work with large OWL 2 ontologies, generated automatically from the 
>> OWL API framework, with thousands of individuals and more than 13 million 
>> property assertions (data and object). As one may assume, one of the 
>> limitations I face is that the OWL API itself cannot manage these large 
>> ontologies, because it loads the whole OWL file into RAM. Never mind 
>> expecting a classical reasoner (Pellet, HermiT, etc.) to infer new 
>> knowledge over such large ontologies.
>>
>> Having explained my problem, the question is: does Jena solve it? That is, 
>> with Jena and TDB can I generate my large ontologies in OWL 2? With Jena 
>> and TDB can I use a reasoner to infer new implicit (unstated) knowledge 
>> over my big ontologies?
>>
>> I do not think Jena will be able to solve this problem; it would be a 
>> pleasant surprise for me. Unfortunately, until now I had not read about 
>> TDB and Jena's capabilities with external (on-disk) storage.
> Indeed Jena does not offer fully scalable reasoning; all inference is 
> done in memory.
>
> That said, 13 million assertions is not *that* enormous; the cost of 
> inference depends on the complexity of the ontology as much as on its 
> scale. So 13m triples with some simple domain/range inferences might work 
> in memory.
>
> TDB storage itself scales just fine, and querying does not load all the 
> data into memory. So if you don't actually need inference, or only need 
> simple inference that can be usefully expressed as part of the SPARQL 
> query, then you are fine.
>
> Dave
>
>
-- 
Lorenz Bühmann
AKSW group, University of Leipzig
Group: http://aksw.org - semantic web research center
