> During high availability cluster checks we observed that
> "ZEOClientStorage"'s approach to detect connection loss to its server is
A couple things:
- What do you think that approach is? Seriously. The only gimmick
I'm aware of (I may be missing some!) is that if asyncore invokes
handle_error(), a chain of snaky calls ends up in ZEO/zrpc/client.py's
ConnectionManager.close_conn(), which tries to connect again.
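For anyone following along, the pattern amounts to something like this
hedged sketch -- simplified and NOT ZEO's actual code; the `connect`
callable, the retry loop, and the attribute names are all invented for
illustration:

```python
import time

class ConnectionManager:
    """Sketch of the reconnect-on-error pattern (hypothetical, not
    ZEO's real ConnectionManager): when the transport layer reports
    an error, drop the connection and keep trying to reconnect."""

    def __init__(self, connect, retry_delay=0.0):
        self._connect = connect        # callable returning a live connection
        self._retry_delay = retry_delay
        self.connection = None
        self.attempts = 0

    def close_conn(self):
        # Invoked from the transport's handle_error() path: discard
        # the dead connection and immediately try to establish a new one.
        self.connection = None
        while self.connection is None:
            self.attempts += 1
            try:
                self.connection = self._connect()
            except OSError:
                time.sleep(self._retry_delay)

# Usage: a connect function that fails twice, then succeeds.
attempts_left = [2]
def flaky_connect():
    if attempts_left[0] > 0:
        attempts_left[0] -= 1
        raise OSError("connection refused")
    return "live connection"

mgr = ConnectionManager(flaky_connect)
mgr.close_conn()       # retries until flaky_connect succeeds
```

Note that this only fires when the OS actually reports an error, which is
exactly the limitation being discussed.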
- How do you define reliable? Also seriously. I'm not aware of
any way to know for sure whether a remote connection is dead,
or just temporarily unresponsive.
> It relies on the fact that the operating system reports a lost
Is there something else it could/should rely on?
> However, by default, TCP does not guarantee any notice for broken
> connections. While, usually, the OS can inform the communication
> endpoints, there are essential cases where this is not the case:
> network and processor outages.
> In our specific case, one of the two ZEO cluster nodes was switched off
> for testing purposes. As expected, the other cluster node took over
> the ZEO service. However, one of our ZEO clients did not notice that it
> lost the server connection and happily worked with stale ZODB data
> (from its caches) for days. Of course, it did not try to write ZODB
> data (otherwise, it would have noticed the lost connection).
OK, so maybe that's what ZEO should do itself: send messages between client
and server "periodically", regardless of whether data "needs" to be
exchanged? If that would count as reliable, I would like to pursue that.
> Probably, "ZEOClientStorage" (and the ZEO server) should use
> "SO_KEEPALIVE" to enable TCP keepalive messages. However, the default
> TCP timeouts are probably too high (2 hours) for many ZODB applications
> (like ours).
Section 4.2.3.6 of RFC 1122 doesn't read like it thinks keep-alives are
either portable or appropriate for this. In particular,
    It is extremely important to remember that ACK segments that
    contain no data are not reliably transmitted by TCP.
    Consequently, if a keep-alive mechanism is implemented it
    MUST NOT interpret failure to respond to any specific probe
    as a dead connection.
Various other Googling strongly suggests there's no portable way to change
the keep-alive interval either (not even across systems that support it).
Besides, if 2 hours is too high anyway (and 2 hours is just the _minimum_
default in RFC 1122; vendors are free to set it higher than that), is there
a real point to messing with SO_KEEPALIVE at all?
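For concreteness, here is what enabling SO_KEEPALIVE (and, on Linux only,
tuning the probe timers) looks like from Python; the TCP_KEEPIDLE /
TCP_KEEPINTVL / TCP_KEEPCNT constants are Linux-specific, which is
precisely the portability problem:

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Portable part: ask the OS to send keep-alive probes on this socket.
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

# Non-portable part: Linux exposes the probe timers as socket options;
# other systems have different (or no) knobs, often only system-wide.
for opt, value in (("TCP_KEEPIDLE", 60),    # idle seconds before first probe
                   ("TCP_KEEPINTVL", 10),   # seconds between probes
                   ("TCP_KEEPCNT", 5)):     # failed probes before giving up
    if hasattr(socket, opt):
        s.setsockopt(socket.IPPROTO_TCP, getattr(socket, opt), value)
```

Without the second half, you're stuck with the system default (the
2-hour-minimum idle time mentioned above).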
If introducing "artificial" ZEO communication could achieve a better result,
and in a tunable, portable way, I'd much rather do that. I suspect there's
another use case for this too: some people run behind obnoxious firewalls,
some of which break a connection if it's been idle for "too long". I don't
think ZEO notices that either now.
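As a sketch of what the client side of such "artificial" traffic could
look like (names and intervals invented for illustration, not ZEO API):
send a no-op ping on a timer, and declare the connection suspect only
after several consecutive intervals pass with no traffic at all, per the
RFC 1122 caution quoted earlier.

```python
import time

class Liveness:
    """Hypothetical application-level keep-alive policy: record when
    any server reply arrives; consider the link suspect only after
    `max_missed` whole ping intervals pass in silence."""

    def __init__(self, ping_interval=30.0, max_missed=3, clock=None):
        self._clock = clock or time.monotonic  # injectable for testing
        self.ping_interval = ping_interval
        self.max_missed = max_missed
        self._last_reply = self._clock()

    def reply_received(self):
        # Any message from the server counts, not just ping replies.
        self._last_reply = self._clock()

    def connection_suspect(self):
        # A single missed probe is NOT treated as a dead connection
        # (RFC 1122); only prolonged total silence is.
        silent = self._clock() - self._last_reply
        return silent > self.ping_interval * self.max_missed

# Usage with a fake clock, so the policy is easy to verify:
now = [0.0]
lv = Liveness(ping_interval=30.0, max_missed=3, clock=lambda: now[0])
now[0] = 60.0    # two silent intervals: still considered alive
ok_at_60 = lv.connection_suspect()
now[0] = 100.0   # past 3 * 30s of silence: now suspect
suspect_at_100 = lv.connection_suspect()
```

The interval and miss count would be the tunable, portable knobs that
SO_KEEPALIVE can't portably provide.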
> I will implement an application specific keep alive mechanism.
How will you do that? Would it be appropriate for ZEO to do something
similar itself? If ZEO did, would you still need to bother?
ZODB-Dev mailing list - ZODB-Dev@zope.org