On Tue, Feb 15, 2011 at 10:21 AM, Dave Reynolds
<[email protected]> wrote:
> On Tue, 2011-02-15 at 09:54 +0100, Ingo Roderfeld wrote:
> > Hi,
> >
> > I'm trying to search in (and manipulate) a large ontology that is stored
> > in TDB. First I fetch the Jena model from the TDB store with the command
> >
> > Model tdbModel = TDBFactory.createModel("/some/path/to/NCI-Thesaurus");
> >
> > I then want to convert it into an OntModel because of the convenience
> > functions that model offers:
> >
> > OntModel ontModel = ModelFactory.createOntologyModel(
> > OntModelSpec.OWL_DL_MEM_RDFS_INF, tdbModel);
>
> The reasoners pull all the data they are working with, and more, into
> memory.
>
> Use a plain no-inference OntModelSpec such as OWL_MEM.
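>
> For example, reusing your existing tdbModel (a minimal sketch, imports
> omitted):
>
>     OntModel ontModel = ModelFactory.createOntologyModel(
>         OntModelSpec.OWL_MEM, tdbModel);
>
> That gives you the OntModel convenience API over the TDB-backed data
> without attaching a reasoner.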
>
> If you need some inference closure then it is best to create that at
> ingest time for such large models. TDB does have some support for RDFS
> closure.
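>
> A rough sketch of materialising an RDFS closure at ingest time (the
> paths are placeholders, imports omitted; the load and the reasoning
> still happen in memory, but only once):
>
>     // Load the base data into memory
>     Model base = FileManager.get().loadModel("/some/path/data.owl");
>     // Wrap it in the built-in RDFS reasoner
>     InfModel inf = ModelFactory.createRDFSModel(base);
>     // Copy the base statements plus their entailments into TDB
>     Model tdbModel = TDBFactory.createModel("/some/path/to/tdb");
>     tdbModel.add(inf);
>     TDB.sync(tdbModel);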
>
But is it possible to store both the base model and the inferred data in TDB
(or SDB), instead of keeping the base in TDB and the inferred data in memory,
as the OWL_MEM_*_INF specs do? Even if that means storing a lot of data, it
is still better than crashing with an OutOfMemoryError.
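
Something like this one-off materialisation is what I have in mind (a rough
sketch; the separate "closure" location and the paths are my own
assumptions):

    // One-off: materialise the RDFS closure of the TDB-backed base model
    Model base = TDBFactory.createModel("/some/path/base");
    InfModel inf = ModelFactory.createRDFSModel(base);
    Model closure = TDBFactory.createModel("/some/path/closure");
    closure.add(inf);   // base + inferred triples, all on disk
    TDB.sync(closure);

    // Afterwards: query the persisted closure with no reasoner attached
    OntModel ontModel = ModelFactory.createOntologyModel(
        OntModelSpec.OWL_MEM, closure);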
Mikhail