[ https://issues.apache.org/jira/browse/HADOOP-15449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16471606#comment-16471606 ]

Akira Ajisaka edited comment on HADOOP-15449 at 5/11/18 7:42 AM:
-----------------------------------------------------------------

+1 for extending the timeout. We observed the same issue.


was (Author: ajisakaa):
+1. We observed the same issue.

> ZK performance issues causing frequent Namenode failover 
> ---------------------------------------------------------
>
>                 Key: HADOOP-15449
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15449
>             Project: Hadoop Common
>          Issue Type: Wish
>          Components: common
>    Affects Versions: 2.7.4
>            Reporter: Karthik Palanisamy
>            Assignee: Karthik Palanisamy
>            Priority: Critical
>         Attachments: HADOOP-15449.patch
>
>
> We have observed with several users that Namenode failover is caused by 
> either ZooKeeper disk slowness (high fsync cost) or network issues. We can 
> avoid this failover to some extent by increasing the HA session timeout, 
> ha.zookeeper.session-timeout.ms.
> The default value is 5000 ms, which seems very low for any production 
> environment. I would suggest 10000 ms as the default session timeout.
>  
> {code}
> ..
> 2018-05-04 03:54:36,848 INFO  zookeeper.ClientCnxn 
> (ClientCnxn.java:run(1140)) - Client session timed out, have not heard from 
> server in 4689ms for sessionid 0x260e24bac500aa3, closing socket connection 
> and attempting reconnect 
> 2018-05-04 03:56:49,088 INFO  zookeeper.ClientCnxn 
> (ClientCnxn.java:run(1140)) - Client session timed out, have not heard from 
> server in 3981ms for sessionid 0x360fd152b8700fe, closing socket connection 
> and attempting reconnect
> .. 
> {code}
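> For reference, a minimal sketch of how the proposed value could be applied 
> per cluster by overriding the property in core-site.xml (the 10000 ms value 
> is the suggestion above, not a tuned production recommendation):
> {code}
> <!-- core-site.xml: override the ZKFC session timeout (default 5000 ms) -->
> <property>
>   <name>ha.zookeeper.session-timeout.ms</name>
>   <value>10000</value>
>   <description>Session timeout used when connecting to ZooKeeper for HA;
>     a larger value tolerates short fsync or network stalls at the cost of
>     slower detection of a genuinely failed Namenode.</description>
> </property>
> {code}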


