Chris Raber wrote:
>
> Assaf,
>
> I'll address the alternative approach you outlined in your other post in
> detail (time allowing). In short, database locks are only good for the time
> span for which we hold onto a database connection. In the world of
> datasource connection pooling, locks are usually only good within short
> transaction boundaries. To handle concurrency detection for a short
> transaction model (where the user is outside the transaction during think time),
> soft locking is required (e.g. by using timestamps, counters, state
> comparisons...). I assume this is what you mean by "dirty checking".
This has nothing to do with connection pooling: you can hold onto a
connection for as long as you hold onto the transaction, and you can
hold onto the transaction (and connection) across multiple invocations
on a stateful session bean. (Not that I recommend that anyone do that.)
So the question is how you perform updates in memory, not within a
database transaction. I have the code to do that, but I couldn't find
anything in the EJB or ODMG API that allows me to do that. Once you're
outside the context of a transaction, how do you lock the objects?
I can easily add an API for that, but as long as it's not industry
standard, I'm going to stay clear of that.
> An issue I see with dirty checking as you have suggested it is that every
> time we want to read from cache we have to also read from the datasource of
> record to check cache coherency. The cost of the coherency check largely
No. Dirty checking works at store time; it's an optimistic locking model.
If you want to make sure your object is synchronized before starting to
work with it, forget about soft transactions. Work with your EJB in the
context of a transaction, specify exclusive access, and dirty checking
will never happen.
If you are performing operations outside of transaction boundary,
there's no alternative to dirty checking.
I know there are solutions in between these two extremes, but they don't
yield easily to a CMP model.
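To make the store-time stamp check concrete, here is a minimal in-memory sketch of the dirty-checking model I'm describing. All class and method names (SoftLockStore, Versioned, load, store) are my own illustration, not from the EJB or ODMG APIs; a real engine would keep the stamp in a database column rather than a map:

```java
import java.util.HashMap;
import java.util.Map;

/** Illustrative sketch of dirty checking (optimistic locking): a version
 *  stamp is handed out at load time and checked again at store time.
 *  Names are hypothetical, not from any EJB or ODMG API. */
public class SoftLockStore {
    /** A stored value together with its version stamp. */
    static final class Versioned {
        final String value;
        final int version;
        Versioned(String value, int version) {
            this.value = value;
            this.version = version;
        }
    }

    private final Map<String, Versioned> table = new HashMap<>();

    /** Load returns the current value and stamp; the caller holds the
     *  stamp across its think time, outside any transaction. */
    public synchronized Versioned load(String key) {
        return table.get(key);
    }

    /** Store succeeds only if the stamp still matches; a mismatch means
     *  the object was modified concurrently and the caller must reload. */
    public synchronized boolean store(String key, String newValue,
                                      int loadedVersion) {
        Versioned current = table.get(key);
        int currentVersion = (current == null) ? 0 : current.version;
        if (currentVersion != loadedVersion) {
            return false; // dirty: someone else stored in the meantime
        }
        table.put(key, new Versioned(newValue, currentVersion + 1));
        return true;
    }
}
```

The point is that nothing is locked during think time: the second writer simply fails the stamp check at store time and has to reload, which is exactly the TransactionAborted/ObjectModified scenario under high concurrency.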
> outweighs the benefit of caching in the first place. So in the case where
Multiple reads with little concurrency, or multiple reads with
infrequent writes, yield very well to the dirty checking model,
especially if you rely on a stamp that is provided at load/store time
and checked at store time. Caching the objects reduces loading and
improves performance.
If you have high concurrency, you want to avoid the
TransactionAborted/ObjectModified scenario, so you fall back to locking,
and the caching engine becomes useless.
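For contrast, a sketch of the exclusive-access path: hold the lock for the duration of the transaction so a writer can never find the object modified underneath it. A plain ReentrantLock stands in for whatever the container would do; again, nothing here is an actual EJB or ODMG API:

```java
import java.util.concurrent.locks.ReentrantLock;

/** Illustrative contrast to dirty checking: exclusive access held for
 *  the whole operation. Concurrent callers block instead of aborting
 *  with TransactionAborted/ObjectModified. Hypothetical names only. */
public class ExclusiveAccount {
    private final ReentrantLock lock = new ReentrantLock();
    private long balance;

    /** Read-modify-write under the exclusive lock; no stamp check is
     *  needed because nobody else can interleave an update. */
    public void add(long delta) {
        lock.lock();
        try {
            balance += delta;
        } finally {
            lock.unlock();
        }
    }

    public long getBalance() {
        lock.lock();
        try {
            return balance;
        } finally {
            lock.unlock();
        }
    }
}
```

The trade-off is exactly the one above: no aborted transactions, but every reader and writer serializes on the lock, so the cache no longer buys you anything under contention.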
> there are applications writing the datasource without going through the cache
> (e.g. they are not playing our write-through game with us), it is probably
> not as worthwhile to cache in the first place, unless some amount of
> staleness is acceptable.
Session beans can modify the same data as entity beans through direct
JDBC. I expect that to happen, I plan for that.
But none of these arguments explains to me why you need a shared
cache. I don't see where the shared cache comes into play, and I don't
see where it gives you any transactional assurance. You can prevent two
servers from affecting the same bean, but you cannot avoid the database
being updated concurrently.
arkin
>
> "There are a hundred ways to skin the cat". Hopefully I have not offended
> any cat lovers today... No particular approach is 100% correct, it's a
> question of best fit for particular requirements.
>
> Regards,
>
> -Chris.
--
----------------------------------------------------------------------
Assaf Arkin www.exoffice.com
CTO, Exoffice Technologies, Inc. www.exolab.org