Dieter Maurer wrote:
We recently observed another ZODB cache inconsistency:
The commit of a huge transaction caused our ZEO server to be late
in responding to the HA monitoring probe. The HA monitor responded
with a SIGTERM targeted to the ZEO server. ZEO restarted.
The ZEO client performing the huge transaction reported an
error in the second phase of its commit.
At that point, the transaction on this system should have been
aborted, the data manager involved should have thrown away its
state, and all of the objects involved in that long transaction
should have been invalidated.
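The expected behavior described above can be sketched as follows. This is an illustrative toy, not the actual ZODB API: the class and method names (`Cache`, `DataManager`, `tpc_finish`, `abort`) are assumptions chosen to mirror the description, and the "server failure" is just a flag.

```python
# Hedged sketch of what *should* happen when the second commit phase
# fails: the transaction is aborted and every object it touched is
# invalidated, so no stale state survives in the client's cache.
# Names are illustrative, not the real ZODB/ZEO interfaces.

class Cache:
    def __init__(self):
        self.data = {}          # oid -> cached object state

    def invalidate(self, oids):
        for oid in oids:
            self.data.pop(oid, None)

class DataManager:
    """Toy data manager illustrating second-phase failure handling."""
    def __init__(self, cache):
        self.cache = cache
        self.modified = set()   # oids written by the current transaction

    def tpc_finish(self, server_ok):
        if not server_ok:       # e.g. the ZEO server went away mid-commit
            self.abort()
            raise ConnectionError("lost server during second commit phase")

    def abort(self):
        # Throw away the transaction's state and invalidate the
        # objects it touched, as the report says should happen.
        self.cache.invalidate(self.modified)
        self.modified.clear()
```

The point of the sketch is only that a second-phase failure must leave the client with *no* trace of the half-committed transaction.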
The ZODB states of other ZEO clients were inconsistent:
some of them had received invalidation messages and saw
the objects modified by the huge transaction with their new
values. Others had not yet received the invalidation messages
and treated the objects as still unchanged.
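The situation described above can be simulated in a few lines. This is a hedged illustration, not the ZEO wire protocol: `broadcast_invalidations`, `die_after`, and the `Client` class are made up for the example. It shows how a server killed partway through sending invalidations leaves some clients seeing the new values and others still treating the objects as unchanged.

```python
# Simulate a ZEO-like server broadcasting invalidation messages and
# being killed (SIGTERM) before reaching every client. Illustrative
# names only; the real ZEO protocol differs in detail.

class Client:
    def __init__(self):
        self.invalidated = set()   # oids this client knows are stale

def broadcast_invalidations(clients, oids, die_after):
    """Send invalidations, crashing after `die_after` clients."""
    for i, client in enumerate(clients):
        if i >= die_after:
            return                 # server killed mid-broadcast
        client.invalidated.update(oids)

clients = [Client(), Client(), Client()]
broadcast_invalidations(clients, {"oid-42"}, die_after=1)

notified = [c for c in clients if "oid-42" in c.invalidated]
stale = [c for c in clients if "oid-42" not in c.invalidated]
# One client sees the object as modified; two still serve the old value.
```

The inconsistency here is *between* clients, which is the distinction Jim asks about further down in the thread.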
That's odd. As Jeremy pointed out, when the ZEO server restarted,
the connections to the clients were lost. When the server came back up
and the clients reconnected, they should have validated their caches
and gotten the necessary invalidations then.
While disconnected, clients will serve data out of their caches,
but they should get new data once they have validated their cache.
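The reconnect-time validation Jim describes can be sketched like this. The function name `verify_cache` and the serial-comparison shape are assumptions for illustration; the real ZEO cache-verification protocol differs in detail, but the idea is the same: compare the revision identifier (serial) of each cached object against the server's current one and drop anything stale.

```python
# Hedged sketch of cache verification on reconnect: drop every cache
# entry whose serial no longer matches the server's current serial.
# Names are illustrative, not the actual ZEO client API.

def verify_cache(cache, server_serials):
    """Remove cache entries that are stale relative to the server.

    cache          : dict mapping oid -> (state, serial)
    server_serials : dict mapping oid -> current serial on the server
    Returns the list of oids that were invalidated.
    """
    stale = [oid for oid, (state, serial) in cache.items()
             if server_serials.get(oid) != serial]
    for oid in stale:
        del cache[oid]
    return stale

# "a" was modified while this client was disconnected; "b" was not.
cache = {"a": ("old-state", 1), "b": ("state", 7)}
server = {"a": 2, "b": 7}
invalidated = verify_cache(cache, server)
```

After verification the client may be missing data (and will reload it from the server on demand), but it no longer holds data that contradicts the server.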
A client can have old data, but a client should never have inconsistent data.
This means that interrupting ZEO while it is sending invalidation messages
can cause inconsistent states in the ZODB caches of its clients.
What do you mean by inconsistent states? Do you mean inconsistent
between clients? Or do you mean that a client's data is inconsistent?
For example, if a transaction updates X and Y, are you suggesting that a
single client is seeing the update to X, but not the update to Y?
Jim Fulton           mailto:[EMAIL PROTECTED]   Python Powered!
CTO                  (540) 361-1714             http://www.python.org
Zope Corporation     http://www.zope.com        http://www.zope.org
For more information about ZODB, see the ZODB Wiki:
ZODB-Dev mailing list - ZODB-Dev@zope.org