Hi Paul,
On 12/10/12 14:09, Paul Taylor wrote:
> Thank you for your answer and simple example,
You're welcome
> and apologies for this late reply.
np
> What I am actually doing is that I have an SDB Store that is
> connected to a MySQL DB. The code uses SDBFactory to connect to a
> named model in the store and returns a Model object, like this:
> Model model = SDBFactory.connectNamedModel(store, modelName). Having
> obtained the Model object, I call FileManager.get().readModel(model,
> sourceURL) to read a model from a file or URI. The URIs the code is
> reading into the model are URIs at the schema level (RDFS or OWL).
> For the purposes of my application I would like to perform some
> simple form of reasoning over such a Model object, so I thought to
> create an OntModel over my existing Model that will allow me to do
> so; correct me if my understanding is wrong.
Yes, this is exactly analogous to the TDB example I put in the gist.
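To make that concrete, here is roughly what the SDB version looks
like. This is a minimal sketch, assuming the Jena 2.x
(com.hp.hpl.jena) packages; the store description file ("sdb.ttl"),
model name, schema URL and the choice of the OWL Micro rule reasoner
are all placeholders for your own setup:

    import com.hp.hpl.jena.ontology.OntModel;
    import com.hp.hpl.jena.ontology.OntModelSpec;
    import com.hp.hpl.jena.rdf.model.Model;
    import com.hp.hpl.jena.rdf.model.ModelFactory;
    import com.hp.hpl.jena.sdb.SDBFactory;
    import com.hp.hpl.jena.sdb.Store;
    import com.hp.hpl.jena.util.FileManager;

    public class SdbOntModelSketch {
        public static void main(String[] args) {
            // Connect to the SDB store described in sdb.ttl and get
            // the named model, as in your code.
            Store store = SDBFactory.connectStore("sdb.ttl");
            Model base = SDBFactory.connectNamedModel(store, "myModel");

            // Read the schema-level data (RDFS or OWL) into the
            // SDB-backed model.
            FileManager.get().readModel(base,
                    "http://example.org/schema.owl");

            // Wrap the persistent model in an OntModel backed by a
            // rule reasoner. Every query the reasoner makes goes down
            // to SDB, hence the performance caveat below.
            OntModel ont = ModelFactory.createOntologyModel(
                    OntModelSpec.OWL_MEM_MICRO_RULE_INF, base);
        }
    }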
There is a 'however', however :) The reasoner makes many small queries
into the model. This works fine for an in-memory model, and works
tolerably well with TDB since TDB works hard to cache results in memory
as much as possible. With SDB, I don't believe that the caching is as
good. So while your design is fine in theory, in practice it may be a
little - or indeed a lot - slow.
There are two possible ways to solve this, which largely depend on the
size of your models and other factors (like how often your data
changes). The first is to compute the inference closure once at load
time, in memory, then write the whole model, including the inferred
triples, into SDB. That way you don't need inference when you query, but
while you can update the data, the updates (obviously) won't trigger any
inferences.
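In outline (same caveats: the names and the OWL Micro reasoner are
just placeholders):

    import com.hp.hpl.jena.rdf.model.InfModel;
    import com.hp.hpl.jena.rdf.model.Model;
    import com.hp.hpl.jena.rdf.model.ModelFactory;
    import com.hp.hpl.jena.reasoner.ReasonerRegistry;
    import com.hp.hpl.jena.sdb.SDBFactory;
    import com.hp.hpl.jena.sdb.Store;
    import com.hp.hpl.jena.util.FileManager;

    public class LoadTimeClosure {
        public static void main(String[] args) {
            // Compute the inference closure once, entirely in memory.
            Model data = FileManager.get()
                    .loadModel("http://example.org/schema.owl");
            InfModel closure = ModelFactory.createInfModel(
                    ReasonerRegistry.getOWLMicroReasoner(), data);

            // Write asserted plus inferred triples into SDB. Queries
            // against the store then need no reasoner at all.
            Store store = SDBFactory.connectStore("sdb.ttl");
            Model persisted = SDBFactory.connectNamedModel(store,
                    "myModel");
            persisted.add(closure);
            store.close();
        }
    }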
The second way is to load the contents of SDB into a memory model when
your app starts and connect that in-memory model to the reasoner. That
way you can update the model and get inferences (remember also to save
the updates to SDB as well).
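A sketch of that second recipe, with a toy update at the end to show
writing to both copies (again, all the names are made up):

    import com.hp.hpl.jena.ontology.OntModel;
    import com.hp.hpl.jena.ontology.OntModelSpec;
    import com.hp.hpl.jena.rdf.model.Model;
    import com.hp.hpl.jena.rdf.model.ModelFactory;
    import com.hp.hpl.jena.rdf.model.Resource;
    import com.hp.hpl.jena.sdb.SDBFactory;
    import com.hp.hpl.jena.sdb.Store;
    import com.hp.hpl.jena.vocabulary.RDF;

    public class InMemoryWithSdbSync {
        public static void main(String[] args) {
            Store store = SDBFactory.connectStore("sdb.ttl");
            Model persisted = SDBFactory.connectNamedModel(store,
                    "myModel");

            // At start-up, copy the persistent data into memory and
            // attach the reasoner to the in-memory copy only.
            Model mem = ModelFactory.createDefaultModel();
            mem.add(persisted);
            OntModel ont = ModelFactory.createOntologyModel(
                    OntModelSpec.OWL_MEM_MICRO_RULE_INF, mem);

            // Updates go to both copies: the OntModel sees them (and
            // their consequences) immediately, SDB keeps the durable
            // record.
            Resource inst = ont.createResource("http://example.org/data#a");
            Resource cls = ont.createResource("http://example.org/ns#Thing");
            ont.add(inst, RDF.type, cls);
            persisted.add(inst, RDF.type, cls);
        }
    }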
The first recipe saves run-time computation, but the cost of updates is
potentially very high since you have to re-run the inference-and-save
process. The second recipe is more responsive to updates while still
keeping the model data persistent, but is limited by the size of your
app's memory. That trade-off can only be resolved with reference to
your user requirements.
> The other way I could do this is to create an OntModel from the
> start; I am not sure, can I do that?
You can, but it won't make any material difference to the end result.
> Also, is there any other way to obtain an OntResource or
> OntClass object and do some reasoning over the model that I have?
Well, you can use the reasoner without OntModel, and vice versa. But the
only(**) way to get inference closure is to use the reasoner.
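For instance, binding a reasoner directly to a plain Model gives you
the closure through the ordinary Model API, no OntModel involved
(a sketch; the data URL is made up):

    import com.hp.hpl.jena.rdf.model.InfModel;
    import com.hp.hpl.jena.rdf.model.Model;
    import com.hp.hpl.jena.rdf.model.ModelFactory;
    import com.hp.hpl.jena.rdf.model.RDFNode;
    import com.hp.hpl.jena.rdf.model.StmtIterator;
    import com.hp.hpl.jena.reasoner.ReasonerRegistry;
    import com.hp.hpl.jena.util.FileManager;
    import com.hp.hpl.jena.vocabulary.RDF;

    public class PlainInfModel {
        public static void main(String[] args) {
            Model data = FileManager.get()
                    .loadModel("http://example.org/data.ttl");

            // No OntModel here: the reasoner is bound straight to the
            // plain Model.
            InfModel inf = ModelFactory.createInfModel(
                    ReasonerRegistry.getRDFSReasoner(), data);

            // Listing statements sees asserted and inferred triples
            // alike.
            StmtIterator it =
                    inf.listStatements(null, RDF.type, (RDFNode) null);
            while (it.hasNext()) {
                System.out.println(it.nextStatement());
            }
        }
    }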
Ian
(**) Actually this is not strictly true - Andy has a tool that will
perform a limited amount of RDFS inference closure as part of the I/O
toolchain. See:
http://jena.apache.org/documentation/io/riot.html#inference