[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-1049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13021695#comment-13021695
 ] 

Mahadev konar commented on ZOOKEEPER-1049:
------------------------------------------

Ben and I looked at the socket(7) man page, and here is what it says:

{noformat}
SO_LINGER
Sets or gets the SO_LINGER option. The argument is a linger structure.
struct linger {
    int l_onoff;    /* linger active */
    int l_linger;   /* how many seconds to linger for */
};
When enabled, a close(2) or shutdown(2) will not return until all queued 
messages for the socket have been successfully sent or the linger timeout has 
been reached. Otherwise, the call returns immediately and the closing is done 
in the background. When the socket is closed as part of exit(2), it always 
lingers in the background.
{noformat}

So, since we have a single thread that calls close() and does IO on the sockets, it is 
possible that this causes a huge delay in IO for the other sockets. If anyone has more 
information on this, please feel free to update!
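
As a rough illustration of why a lingering close can stall the IO thread (a sketch only, 
using plain java.net.Socket rather than the actual ZooKeeper server code):

{noformat}
import java.io.IOException;
import java.net.Socket;

public class LingerCloseSketch {
    // With SO_LINGER enabled, close() can block the calling thread for up to
    // 'seconds' while the kernel tries to flush queued data. If that same
    // thread also services IO for other sockets, they all stall for that long.
    static void lingeringClose(Socket s, int seconds) throws IOException {
        s.setSoLinger(true, seconds);
        s.close();  // may block for up to 'seconds'
    }

    // With SO_LINGER disabled (the default), close() returns immediately and
    // the kernel finishes sending any queued data in the background.
    static void backgroundClose(Socket s) throws IOException {
        s.setSoLinger(false, 0);  // the linger value is ignored when disabled
        s.close();  // returns right away
    }
}
{noformat}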






> Session expire/close flooding renders heartbeats to delay significantly
> -----------------------------------------------------------------------
>
>                 Key: ZOOKEEPER-1049
>                 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-1049
>             Project: ZooKeeper
>          Issue Type: Bug
>          Components: server
>    Affects Versions: 3.3.2
>         Environment: CentOS 5.3, three node ZK ensemble
>            Reporter: Chang Song
>            Priority: Critical
>         Attachments: ZookeeperPingTest.zip, zk_ping_latency.pdf
>
>
> Let's say we have 100 clients (group A) already connected to a three-node ZK 
> ensemble with a session timeout of 15 seconds, and we have 1000 clients (group 
> B) already connected to the same ZK ensemble, all watching several nodes 
> (also with a 15-second session timeout).
> Consider a case in which all clients in group B suddenly hang or deadlock 
> (JVM OOME) at the same time. 15 seconds later, all sessions in group B 
> expire, creating a session-closing stampede. Depending on the number of 
> clients in group B, every request/response the ZK ensemble has to process gets 
> delayed by up to 8 seconds (with the 1000 clients we tested).
> This delay causes some clients in group A to have their sessions expire because 
> of the delay in getting heartbeat responses, which in turn causes healthy 
> servers to drop out of their clusters. This is a serious problem in our 
> installation, since some of our services running batch servers or CI servers 
> create the same scenario as above almost every day.
> I am attaching a graph showing ping response time delay.
> I think the ordering of creating/closing sessions and ping exchanges isn't 
> important (quorum state machine); at the very least, ping requests/responses 
> should be handled independently (on a different queue and a different thread) 
> to keep the real-time behavior of pings.
> As a workaround, we are raising the session timeout to 50 seconds, but this 
> causes the maximum failover time of the cluster to increase significantly, so 
> the initial QoS we promised cannot be met.
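
As an illustrative sketch of the workaround described above (assuming the standard 
org.apache.zookeeper.ZooKeeper client constructor; the connect string and watcher are 
placeholders, not values taken from this report):

{noformat}
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class LongSessionTimeoutClient {
    public static void main(String[] args) throws Exception {
        // Ask for a 50-second session timeout instead of 15 seconds. The server
        // negotiates the actual timeout within its configured
        // minSessionTimeout/maxSessionTimeout bounds.
        int sessionTimeoutMs = 50 * 1000;
        ZooKeeper zk = new ZooKeeper("zk1:2181,zk2:2181,zk3:2181",
                sessionTimeoutMs, new Watcher() {
                    public void process(WatchedEvent event) {
                        // Connection and session state changes arrive here.
                    }
                });
        // ... use zk as usual ...
        zk.close();
    }
}
{noformat}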

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
