On 20/08/13 14:53, Shapiro, Alexander wrote:
Unfortunately, we also have old versions of Jena - the TDB version is 0.8.9 and
the Jena version is 2.6.4.
So we do not have the StoreConnection class.
Do you remember how to close the model "right" in this old version?
January 2011. Err.
Something like TDBMaker.releaseDataset or maybe it's
TDBMaker.releaseLocation
I think you should check the source code as to what it does.
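If it helps, a rough sketch of what that release might look like against TDB
0.8.9 - the method names are the ones above, but the exact signatures need
checking against the 0.8.9 source, and the path is only illustrative:

// Sketch only; verify the TDBMaker calls against the TDB 0.8.9 source.
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.tdb.TDBFactory;
import com.hp.hpl.jena.tdb.TDBMaker;
import com.hp.hpl.jena.tdb.base.file.Location;

public class OldTdbReleaseSketch {
    public static void main(String[] args) {
        String dir = "/data/shared_location";        // illustrative path
        Model model = TDBFactory.createModel(dir);
        // ... use the model ...
        model.close();
        // Force the cached dataset for this location out, so a reopen re-reads the files.
        TDBMaker.releaseLocation(new Location(dir));
        // or, depending on what 0.8.9 actually provides: TDBMaker.releaseDataset(...)
    }
}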
Andy
Alex
-----Original Message-----
From: Andy Seaborne [mailto:[email protected]]
Sent: Tuesday, August 20, 2013 15:26
To: [email protected]
Subject: Re: Two tdb instances using same data files
On 20/08/13 12:49, Shapiro, Alexander wrote:
Andy,
Thank you for your answer.
We have about 10,000,000 triples in the data files.
We will work on architecture changes.
Meanwhile, if we take the "close-reopen" approach, what is the best way to
completely close the model?
StoreConnection.release(Location)
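A minimal sketch of that close-and-release sequence (Jena 2.x package names
assumed; the path is illustrative):

// Sketch only; assumes a TDB version with StoreConnection (0.9.x onwards).
import com.hp.hpl.jena.query.Dataset;
import com.hp.hpl.jena.tdb.StoreConnection;
import com.hp.hpl.jena.tdb.TDBFactory;
import com.hp.hpl.jena.tdb.base.file.Location;

public class CloseReleaseSketch {
    public static void main(String[] args) {
        Location location = new Location("/data/shared_location");   // illustrative path
        Dataset dataset = TDBFactory.createDataset(location);
        // ... read or update ...
        dataset.close();                    // close the high-level handle
        StoreConnection.release(location);  // drop the store from this JVM completely
        // A later TDBFactory.createDataset(location) re-attaches to the files on disk.
    }
}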
Andy
Thank you,
Alex
-----Original Message-----
From: Andy Seaborne [mailto:[email protected]]
Sent: Monday, August 19, 2013 16:06
To: [email protected]
Subject: Re: Two tdb instances using same data files
On 19/08/13 13:21, Shapiro, Alexander wrote:
Hi,
A while ago I asked a question about two TDB instances using the same data files
(see below).
Now I'm using the following code to open a model on both instances:
SystemTDB.setFileMode(FileMode.direct);
TDBMaker.setImplFactory(TDBMaker.uncachedFactory);
model = TDBFactory.createModel(shared_location);
The problem is that when I make changes on one instance, sometimes they are
seen in the other instance and sometimes they are not.
So I have two questions -
1. How is this possible when I do not use any cache?
**** This is a very bad idea ****
**** Share a single data server ****
You are still using per-dataset caches - uncachedFactory only controls the
caching of built datasets within the same JVM, not the caching inside a dataset.
I'm not guaranteeing this will work at all. It's madness to update files on
disk from two different JVMs and expect any kind of consistency or safety.
I suggested it might work to close a dataset on update (actually a sync is
sufficient) and have the other JVM close-and-reopen it. How you coordinate
that is hard and must be done very carefully. The other JVM can't be reading
the database between the update starting and being told to close-reopen.
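To make the shape of that hand-off concrete - the notification calls here are
hypothetical placeholders (a lock file, socket, queue, whatever you have);
TDB.sync and the close-reopen are the TDB parts, and it assumes the uncached
factory (as in your code) or an explicit release so the reopen really re-reads
the files:

// Sketch of the coordination only; notifyReader()/waitForWriter() are hypothetical stand-ins.
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.tdb.TDB;
import com.hp.hpl.jena.tdb.TDBFactory;

public class CoordinationSketch {
    // Updating JVM: apply the changes, flush them to disk, then signal.
    static void writerSide(Model model) {
        // ... apply changes ...
        TDB.sync(model);      // force the changes out to the files on disk
        // notifyReader();    // hypothetical: tell the other JVM it may close-reopen
    }

    // Reading JVM: must not touch the database while the update is in progress.
    static Model readerSide(String dir, Model model) {
        // waitForWriter();   // hypothetical: block until the writer has synced
        model.close();
        return TDBFactory.createModel(dir);   // reopen to see the new state
    }
}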
If you are inconsistently seeing changes, then maybe one or more of:
(1) You are not
(2) You are on 64 bit (and not using transactions?)
Memory-mapped files may mean that changes can be flushed at any time.
(3) You are on 32 or 64 bit and the second JVM is reading at the
same time as the updating JVM.
You can't stop an inconsistent set of changes being seen in models in the
second JVM unless you coordinate a switch over.
2. Is there any way to solve this problem?
Changing the architecture of your application.
Share a single data server.
Anything else is unlikely to be more than an unreliable fudge.
-----------------
How much data do you have?
There are two designs that are good (a rough client-side sketch of both follows):
1/ SPARQL Query/Update to a shared Fuseki server.
2/ Use Graph Store Protocol to PUT/GET models from a shared server.
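Sketch of the client side, assuming a Fuseki server hosting a dataset at
http://host:3030/ds with query, update and data services - all the URLs here
are examples:

// Sketch only; the service URLs are examples for a Fuseki dataset named "ds".
import com.hp.hpl.jena.query.*;
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.update.*;

public class SharedServerSketch {
    public static void main(String[] args) {
        // 1/ SPARQL Update and SPARQL Query against the shared server.
        UpdateRequest update = UpdateFactory.create(
            "INSERT DATA { <http://example/s> <http://example/p> <http://example/o> }");
        UpdateExecutionFactory.createRemote(update, "http://host:3030/ds/update").execute();

        QueryExecution qExec = QueryExecutionFactory.sparqlService(
            "http://host:3030/ds/query", "SELECT (COUNT(*) AS ?n) { ?s ?p ?o }");
        try {
            ResultSetFormatter.out(qExec.execSelect());
        } finally {
            qExec.close();
        }

        // 2/ Graph Store Protocol: GET/PUT whole models.
        DatasetAccessor accessor = DatasetAccessorFactory.createHTTP("http://host:3030/ds/data");
        Model m = accessor.getModel();   // GET the default graph
        // ... change m locally ...
        accessor.putModel(m);            // PUT it back, replacing the default graph
    }
}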
Andy
Thank you,
Alex
-----Original Message-----
From: Alex Shapiro [mailto:[email protected]]
Sent: Monday, March 11, 2013 16:58
To: '[email protected]'
Subject: RE: Two tdb instances using same data files
Thank you, Andy!
I will try this.
Alex
-----Original Message-----
From: Andy Seaborne [mailto:[email protected]] On Behalf
Of Andy Seaborne
Sent: Monday, March 11, 2013 15:46
To: [email protected]
Subject: Re: Two tdb instances using same data files
On 11/03/13 11:29, Alex Shapiro wrote:
Thank you Marco and Andy! I fully understand that changes made
in one JVM will not update the model in the second JVM and that
this is in general a bad idea :-). We are working on changing the
architecture of our application. Meanwhile, let's say I know when
the update is done in one JVM and can notify the second JVM about
the change - will it help to close the model in the second JVM and
reopen it, or reset the model somehow, to get the changes made in
the first JVM?
Alex
"bad idea" is an understatement!
This might work:
Close the dataset and force it out of the dataset cache.
TDBMaker.releaseDataset
But it's probably better to use uncached datasets in the first place:
TDBMaker.setImplFactory(TDBMaker.uncachedFactory)
No guarantees whatsoever.
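Put together, a rough sketch of the uncached route (Jena 2.x package names
assumed; the directory is illustrative):

// Sketch only; with the uncached factory a close-and-reopen re-reads the files on disk.
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.tdb.TDBFactory;
import com.hp.hpl.jena.tdb.TDBMaker;

public class UncachedReopenSketch {
    public static void main(String[] args) {
        TDBMaker.setImplFactory(TDBMaker.uncachedFactory);   // must be set before any create
        String dir = "/data/shared_location";                // illustrative path

        Model model = TDBFactory.createModel(dir);
        // ... read ...
        model.close();

        // After being told the other JVM has finished its update:
        model = TDBFactory.createModel(dir);   // a fresh dataset, not one from the cache
        // (With the default cached factory you would need TDBMaker.releaseDataset first
        //  to force the old one out of the cache.)
    }
}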
Andy