> I think you are right.
Yes, definitely.
As usual, the main problem here is that the "quick and dirty caching
algorithm" was written a long time before transaction support was added.
> A quick solution would be to invalidate the cache in
> case of a rollback, but there might be an additional problem, at least we
> had it in our cache algorithm (an object was present in two delta caches
> and modified within two transactions).
Can anything worse than a dirty read occur? As soon as one of the
transactions is rolled back, the cache would be flushed, so we should be
fine the next time the object is read.
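To make the "quick solution" concrete, here is a minimal sketch of a shared cache that is flushed wholesale whenever any transaction rolls back. All names (`FlushOnRollbackCache`, `loader`, etc.) are illustrative, not from any actual store code:

```python
class FlushOnRollbackCache:
    """Shared object cache that is invalidated entirely on any rollback."""

    def __init__(self):
        self._cache = {}

    def get(self, key, loader):
        # A reader may still observe a value written by an uncommitted
        # transaction (a dirty read), but nothing staler than that
        # survives a rollback, because on_rollback() empties the cache.
        if key not in self._cache:
            self._cache[key] = loader(key)
        return self._cache[key]

    def put(self, key, value):
        self._cache[key] = value

    def on_rollback(self):
        # Invalidate everything; the next read reloads from the store.
        self._cache.clear()
```

This trades cache efficiency for simplicity: one rollback anywhere evicts every cached object, even ones the aborted transaction never touched.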
> In our stores we implemented our own caches; for the Standard store caches
> were not limited in size. We re-used an existing in-house technology called
> delta caches, where we have a global cache containing all the unchanged
> objects and a series of delta caches, each for one transaction. In case of
> a commit, the delta cache is copied to the global cache; in case of abort
> the delta cache is discarded. Each time a transaction wants to modify an
> object, it is either read from the delta cache, or transported there from
> the global cache. The transportation method must ensure that an object is
> transported to only one delta cache. Unfortunately our delta cache code is
> child dependent.
Interesting. Eventually we will need a cache implementation that is
transaction-aware.
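As a starting point, here is a rough sketch of the delta-cache scheme described above: one global cache of committed objects, one delta cache per transaction, a "transport" step that guarantees an object lives in at most one delta cache, commit copying the delta into the global cache, and abort discarding it. This is my reading of the description, not actual store code, and every name in it is made up:

```python
class DeltaCacheManager:
    """Global cache plus one delta cache per transaction (sketch)."""

    def __init__(self):
        self.global_cache = {}  # key -> last committed state
        self.deltas = {}        # txn id -> {key -> modified state}
        self.owner = {}         # key -> txn id holding it in a delta cache

    def begin(self, txn):
        self.deltas[txn] = {}

    def read(self, txn, key):
        # A transaction sees its own modifications first, then the
        # globally cached committed state.
        if key in self.deltas[txn]:
            return self.deltas[txn][key]
        return self.global_cache.get(key)

    def modify(self, txn, key, value):
        # "Transport": an object may be moved into at most one delta
        # cache, so a second transaction touching it is refused.
        holder = self.owner.get(key)
        if holder is not None and holder != txn:
            raise RuntimeError(f"{key!r} already owned by txn {holder}")
        self.owner[key] = txn
        self.deltas[txn][key] = value

    def commit(self, txn):
        # Copy the delta cache into the global cache, then drop it.
        for key, value in self.deltas.pop(txn).items():
            self.global_cache[key] = value
            del self.owner[key]

    def abort(self, txn):
        # Discard the delta cache; the global cache is untouched.
        for key in self.deltas.pop(txn):
            del self.owner[key]
```

The ownership check is what prevents the problem mentioned above (one object present in two delta caches and modified within two transactions): the second writer fails fast instead of creating a conflicting copy.

Remy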
Remy