See below...

----- Original Message -----
From: "max" <[EMAIL PROTECTED]>
To: "OJB Users List" <[EMAIL PROTECTED]>
Sent: Thursday, September 12, 2002 2:32 PM
Subject: Re: per thread caching [was: Re: Anyone up for a "challange" ? :)]


> > > How can he control that another thread does not query for one of
> > > the objects he has loaded ?
> > >
> > > example:
> > >
> > > TX1 starts               (cache: <empty>)
> > > TX1: finds A           (cache: A )
> > > TX2: starts              (cache: A )
> > > TX1: modifies A      (cache: A' )
> > > TX2: finds A           (cache: A' )   ...TX2 now has a direct
> > > reference to A' that cannot be changed by OJB
> > > TX1: cancel transaction (cache: ? (I guess: A'))
> > > TX2: commits something based on A' even though it should have been A
> > >
> > > How do you ensure that this does not happen with a singleton
> > > cachestrategy ?
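The quoted scenario is easy to reproduce with a few lines of plain Java. The toy cache below is a stand-in for a singleton cache strategy (not OJB's actual ObjectCache): because both transactions are handed the same instance, TX1's uncommitted change leaks straight into TX2.

```java
import java.util.HashMap;
import java.util.Map;

// Toy singleton cache: both "transactions" get the SAME object instance,
// so an uncommitted modification by TX1 is immediately visible to TX2.
public class SharedCacheDemo {
    static class Account {
        int balance;
        Account(int balance) { this.balance = balance; }
    }

    // stand-in for a singleton object cache keyed by object identity
    static final Map<String, Account> cache = new HashMap<>();

    static Account find(String oid) {
        // a real cache would fall back to the database on a miss
        return cache.computeIfAbsent(oid, k -> new Account(100));
    }

    public static void main(String[] args) {
        Account a1 = find("A");   // TX1: finds A
        a1.balance = 42;          // TX1: modifies A (uncommitted -> A')
        Account a2 = find("A");   // TX2: finds A, but actually gets A'
        System.out.println(a2.balance);   // prints 42, not the committed 100
        System.out.println(a1 == a2);     // prints true: same instance
    }
}
```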
> >
> > You need to prevent this situation yourself.
> >
> > If a transaction is canceled this is what I currently do to prevent
> > data-integrity problems:
> >
> >    1. Clear the cache.
> >    2. Refresh the state of all objects involved in the transaction
> > from the database. I do this in the implementation of
> > TransactionAware.afterAbort(). OJB automatically calls this method
> > when an ODMG transaction is cancelled/aborted. Your persistent
> > objects need to implement the TransactionAware interface.
> >
> > Step #1 prevents invalid data from being re-read in step #2. Step #2
> > ensures that TX2 does not read invalid data. For this to be true,
> > though, you need to set your transaction isolation level to
> > 'read-committed'.
>
> Ok - I do not understand how steps #1 and #2 can prevent TX2 from
> getting a hold on A'?

Step #1 doesn't prevent TX2 from getting a hold on A'. It just ensures that
when
transaction TX2 attempts to read/write lock the object, it doesn't
read/write invalid data. TX2 must wait until the object has been refreshed
from the database before it can read or write the object's data.

In other words, OJB releases an object lock only after
TransactionAware.afterAbort() has returned.
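As a sketch of steps #1 and #2: the TransactionAware interface and afterAbort() callback come straight from the discussion above, but the Cache class and the "database" field here are simplified stand-ins, not OJB's real classes (a real implementation would re-query the database in afterAbort()).

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the abort handling described above:
// step #1 clears the cache, step #2 refreshes committed state.
public class AbortHandlingSketch {
    interface TransactionAware { void afterAbort(); }

    static class Cache {
        private final List<Object> entries = new ArrayList<>();
        void put(Object o) { entries.add(o); }
        void clear() { entries.clear(); }   // step #1
        int size() { return entries.size(); }
    }

    static class Item implements TransactionAware {
        final Cache cache;
        int value = 1;     // in-memory state
        int dbValue = 1;   // stand-in for the committed state in the database

        Item(Cache cache) { this.cache = cache; }

        // OJB calls this when the ODMG transaction is cancelled/aborted.
        public void afterAbort() {
            cache.clear();     // step #1: drop possibly-dirty cache entries
            value = dbValue;   // step #2: re-read committed state
        }
    }

    public static void main(String[] args) {
        Cache cache = new Cache();
        Item item = new Item(cache);
        cache.put(item);
        item.value = 99;       // uncommitted modification (A -> A')
        item.afterAbort();     // transaction cancelled
        System.out.println(item.value);    // 1: back to committed state
        System.out.println(cache.size()); // 0: cache was cleared
    }
}
```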


> Is it because read-committed will not allow me to read modified objects?
> If that is required, then the performance would be degraded, as each
> thread working on the same object has to wait for a read lock even
> though it could just get a fresh copy from the database, which is still
> valid (because thread 2 has not yet committed), if just the transaction
> had its own cache! Then there would be no such problems :)

This would be an interesting test to conduct! Is it more performant to wait
for a 'short' transaction to complete or to query and build a new object
instance from the database? What are the factors that influence one strategy
over another: # concurrent threads, object graph size, networked vs.
in-memory database, etc?
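A toy harness for such a test could look like the following. Both costs are simulated with sleeps here (5 ms for the blocking transaction, 2 ms for a database round-trip are made-up numbers); the real values would depend entirely on the factors listed above.

```java
// Toy micro-benchmark skeleton: compare waiting out a short transaction's
// lock against rebuilding a fresh object from the database. Both costs
// are SIMULATED; plug in real lock waits and real queries to measure.
public class WaitVsRebuild {
    static long millis(Runnable r) {
        long t0 = System.nanoTime();
        r.run();
        return (System.nanoTime() - t0) / 1_000_000;
    }

    static void sleep(long ms) {
        try { Thread.sleep(ms); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }

    public static void main(String[] args) {
        long waited  = millis(() -> sleep(5)); // wait out a 5 ms transaction
        long rebuilt = millis(() -> sleep(2)); // 2 ms simulated DB round-trip
        System.out.println("wait: " + waited + " ms, rebuild: " + rebuilt + " ms");
    }
}
```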


>
> > IMO, you only need to clear the cache if a transaction aborts. If you
> > handle the above situation properly, you can still leverage a single,
> > global cache.
> >
> >
> > >
> > > > If you use the ODMG layer, it does the work of committing
> > > all changed
> > > > objects (since by write-locking an Object it is registered for later
> > > > update to the db)
> > >
> > > Yes, but does the LockManager remove those objects that are
> > > cancelled?
> >
> > OJB DOES remove the aborted objects from the cache. Not the
> > LockManager, though. It is the OJB classes that implement
> > ModificationState that remove aborted objects from the cache (if the
> > modification state is not clean).
>
> Yes - but this does not remove the other transactions' references to
> those objects, right?!

Right. That's why you need to handle the rollback yourself with this version
of OJB. Maybe at some future time OJB could transparently handle the
rollback by using reflection or getters/setters.

Looks like someone started to implement this in ObjectEnvelope.setFields()
but this method isn't currently used...
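For illustration only (this is not OJB's actual code, and setFields() may work differently), a reflection-based rollback could snapshot an object's fields when it is write-locked and copy them back on abort:

```java
import java.lang.reflect.Field;

// Sketch of a transparent, reflection-based rollback: snapshot all declared
// fields at lock time, restore them in place when the transaction aborts.
public class ReflectiveRollback {
    static Object[] snapshot(Object o) throws IllegalAccessException {
        Field[] fields = o.getClass().getDeclaredFields();
        Object[] values = new Object[fields.length];
        for (int i = 0; i < fields.length; i++) {
            fields[i].setAccessible(true);
            values[i] = fields[i].get(o);
        }
        return values;
    }

    static void restore(Object o, Object[] values) throws IllegalAccessException {
        Field[] fields = o.getClass().getDeclaredFields();
        for (int i = 0; i < fields.length; i++) {
            fields[i].setAccessible(true);
            fields[i].set(o, values[i]);
        }
    }

    static class Account { int balance = 100; String owner = "max"; }

    public static void main(String[] args) throws Exception {
        Account a = new Account();
        Object[] before = snapshot(a);  // taken when the write lock is acquired
        a.balance = 42;                 // uncommitted change
        restore(a, before);             // transaction aborts: roll back in place
        System.out.println(a.balance);  // 100
    }
}
```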


> And this requirement of implementing ModificationState is not good :(

Yeah, I understand. If someone has a more transparent, workable solution than
guideline #4, I'd like to know.

>
> > When a transaction aborts, though, I clear the cache (step #1 above)
> > just to ensure that I'm truly refreshing all objects and
> > relationships from the database.
>
> Here I totally agree... I just don't find it very attractive to clear
> the cache, as the objects might be used in other ongoing transactions!!
> (Or are you just removing the objects touched/read/written in the
> failed transaction? If yes, how about related transactions? Are they
> aborted too?)

I clear the cache to ensure that I get clean data from the database. I don't
want to attempt to refresh an object or object reference and have it read
from a stale cache.

Other transactions shouldn't be affected, as they have their own references
to their transacted objects. Other transactions have to wait for the aborted
transaction to refresh its objects' data and then release its write locks
before they can read/write lock the same objects. In this manner,
transactions are isolated and you don't need to abort other concurrent
transactions.
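That ordering (refresh before unlock) can be sketched with a plain ReentrantReadWriteLock standing in for OJB's LockManager. Because TX1 restores committed state before releasing its write lock, the blocked TX2 can only ever observe committed data:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustration of refresh-then-unlock on abort: a concurrent reader blocked
// on the lock never sees the aborted transaction's uncommitted value.
public class RefreshBeforeUnlock {
    static final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    static int value = 100;   // committed state in the "database"

    static int observedByTx2() throws InterruptedException {
        AtomicInteger seen = new AtomicInteger();

        lock.writeLock().lock();   // TX1 write-locks the object
        value = 42;                // TX1's uncommitted change (A -> A')

        Thread tx2 = new Thread(() -> {
            lock.readLock().lock();    // blocks until TX1 releases its lock
            seen.set(value);           // sees only refreshed, committed data
            lock.readLock().unlock();
        });
        tx2.start();

        Thread.sleep(50);          // give TX2 time to block on the read lock
        value = 100;               // TX1 aborts: refresh from the database...
        lock.writeLock().unlock(); // ...and only then release the write lock
        tx2.join();
        return seen.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(observedByTx2());   // 100
    }
}
```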




