Hi there, I posted a while back about a problem I was having that we
thought was related to caching of sequences. The bug causes two app
servers doing inserts at the same time to get the same key out of
OJB_HL_SEQ. The first insert then goes through with that key, and the
second app server does an update instead, overwriting the originally
inserted record.
We thought we had a solution by changing SequenceGenerator to flush the
cache before getting a new sequence (see below). However, this didn't
actually fix the problem - it seems that even when I run OJB (I'm using
0.9.9, PB API) with the 'ObjectCacheEmptyImpl' class I still see it! It
looks like SQL Server isn't locking the transactions properly, or I'm
not setting the right isolation level in OJB. Two app servers grab from
the OJB_HL_SEQ table at almost exactly the same time (milliseconds
apart), and they both get the same key! That trickles down to two
objects with the same id, and the second one wins when OJB does an
update over the original record.
Armin, do you have any ideas? It seems that even while a transaction is
in progress in SequenceGenerator, nothing prevents another
application/instance from grabbing the next value for that row!
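The race can be reproduced outside the database with a self-contained simulation (this is not OJB code - the class name and timings are made up for illustration). Two "servers" do an unsynchronized read-then-write on a shared max-key value, just like two brokers reading OJB_HL_SEQ without a row lock; serializing the grab is the effect a proper lock or isolation level would have:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class HighLowRace {
    // Stand-in for the MAX_KEY column of the OJB_HL_SEQ row.
    static int maxKey;

    // Unsynchronized read-then-write, as seen by two app servers whose
    // transactions don't lock the row: both can read the same value.
    static int grabUnsafe() throws InterruptedException {
        int next = maxKey + 1;   // server reads the current max
        Thread.sleep(5);         // widen the race window (network/commit delay)
        maxKey = next;           // second writer silently overwrites the first
        return next;
    }

    // Serialized read-then-write: the effect a proper row lock (or a
    // stricter isolation level) on OJB_HL_SEQ would give.
    static synchronized int grabSafe() {
        maxKey = maxKey + 1;
        return maxKey;
    }

    // Runs two concurrent "servers" and returns the distinct keys they got.
    static Set<Integer> run(boolean safe) throws Exception {
        maxKey = 0;
        ExecutorService pool = Executors.newFixedThreadPool(2);
        List<Future<Integer>> futures = new ArrayList<>();
        for (int i = 0; i < 2; i++) {
            futures.add(pool.submit(() -> safe ? grabSafe() : grabUnsafe()));
        }
        Set<Integer> keys = new HashSet<>();
        for (Future<Integer> f : futures) {
            keys.add(f.get());
        }
        pool.shutdown();
        return keys;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("unsafe distinct keys: " + run(false).size()); // often 1
        System.out.println("safe distinct keys:   " + run(true).size());  // always 2
    }
}
```

The unsafe variant usually yields one distinct key (both servers return the same value); the synchronized variant always yields two.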
I'm assuming the database should take care of this, or that I have to
configure OJB to tell it to. SQL Server says it defaults to
READ_UNCOMMITTED. I thought I saw a setting for this in an older
OJB version, but it seems to have been removed or moved somewhere. I
see that repository-junit has a setting on an Article class,
isolation-level="read-uncommitted" - does that mean I can adjust the
isolation level on OJB_HL_SEQ by editing its entry in
repository_internal.xml to look like the following?
(for example)
---------
<class-descriptor
    class="org.apache.ojb.broker.util.sequence.HighLowSequence"
    table="OJB_HL_SEQ"
    isolation-level="read-committed"
/>
I've heard from the OPTA folks (inetsoftware.de) some examples, but I'm
not sure which of these OJB is doing behind the scenes:
How to create a read lock on a record for update
There are no absolute read locks for the SQL Server. The reading
connection needs to support the read lock (transaction isolation level).
Case 1:
The writing connection started a transaction with:
    setAutoCommit(false);
After the first update of data there is a read lock if the reading
connection has an isolation level of TRANSACTION_READ_COMMITTED,
TRANSACTION_REPEATABLE_READ or TRANSACTION_SERIALIZABLE. The read lock
ends with the end of the transaction (setAutoCommit(true), commit(),
rollback()).
Case 2:
The writing connection has not started a transaction
(setAutoCommit(true)) and creates a JDBC(tm) 2.0 statement with:
    createStatement(ResultSet.TYPE_SCROLL_SENSITIVE,
                    ResultSet.CONCUR_UPDATABLE+1);
After the first update of data there is a read lock if the reading
connection has an isolation level of TRANSACTION_SERIALIZABLE. The read
lock ends if the ResultSet scrolls to the next fetch block
(setFetchSize()).
Ryan
---------------------------------------------------------------------------
Indeed, that's a better workaround for the problem.
But the same problems you have with HighLowSequence
could occur with any other persistence-capable object.
Say server1 and server2 both cache object A, and server1
updates A-->A'. Then server2 does a select for A and finds
A, not A'. Server2 will never see the updated version of A.
Thus you need a distributed cache (JCS, or a JMS-based one
in J2EE-compliant environments), or a cache that only exists
e.g. for a transaction or until you close the PersistenceBroker
instance, or a cache implementation that removes objects after a
specific time period.
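The last option can be sketched in a few lines. This is not the OJB ObjectCache interface, just the idea - the class name, method names, and timeout are illustrative:

```java
import java.util.HashMap;
import java.util.Map;

// A minimal time-based eviction cache: entries expire after a fixed
// period, so a stale copy of A can never outlive the timeout.
public class ExpiringCache {
    private final long timeoutMillis;
    private final Map<Object, Entry> entries = new HashMap<>();

    private static final class Entry {
        final Object value;
        final long storedAt;
        Entry(Object value, long storedAt) {
            this.value = value;
            this.storedAt = storedAt;
        }
    }

    public ExpiringCache(long timeoutMillis) {
        this.timeoutMillis = timeoutMillis;
    }

    public synchronized void cache(Object oid, Object obj) {
        entries.put(oid, new Entry(obj, System.currentTimeMillis()));
    }

    // Returns null once an entry is older than the timeout (and drops it),
    // forcing the caller back to the database, where the other server's
    // update is visible.
    public synchronized Object lookup(Object oid) {
        Entry e = entries.get(oid);
        if (e == null) return null;
        if (System.currentTimeMillis() - e.storedAt > timeoutMillis) {
            entries.remove(oid);
            return null;
        }
        return e.value;
    }

    public static void main(String[] args) throws InterruptedException {
        ExpiringCache cache = new ExpiringCache(50);
        cache.cache("A", "version 1");
        System.out.println(cache.lookup("A"));  // prints: version 1
        Thread.sleep(80);
        System.out.println(cache.lookup("A"));  // prints: null
    }
}
```

A real implementation would plug into OJB's cache configuration instead of being used directly, but the eviction logic is the same.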
regards,
Armin
----- Original Message -----
From: "Ryan Vanderwerf" <[EMAIL PROTECTED]>
To: "OJB Users List" <[EMAIL PROTECTED]>; "Armin Waibel"
<[EMAIL PROTECTED]>
Sent: Thursday, February 20, 2003 6:25 PM
Subject: RE: is this an old .97 bug? (sequencing) - happens in 0.9.9 too
I've put in the following fix, which isn't quite as much of a
performance drain, in SequenceGenerator.java, around line 47:

broker.beginTransaction();
// temp fix for sequence caching problem: evict the cached sequence
// so the next read goes to the database
newSequence = (HighLowSequence) broker.getObjectByQuery(q);
broker.removeFromCache(newSequence);

I threw quite a bit of load at it, and it still seems to stop OJB from
corrupting its own inserts and updates.
Ryan