[ https://issues.apache.org/jira/browse/ZOOKEEPER-1869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13891894#comment-13891894 ]
Germán Blanco commented on ZOOKEEPER-1869:
------------------------------------------
Hello Deepak,
yes, I looked at the logs.
It could be a ZooKeeper bug, triggered by network instability. As indicated in
my previous post (the second post in this JIRA), the code reaches a point that
I haven't seen until now in other logs, and that log line indicates that the
server will no longer be listening on the FLE ports. If we can verify that your
problem happens when that log line is produced, then we have located the bug.
Perhaps you could try to reproduce the situation in which the problem happened,
or we could generate a ZooKeeper jar that jumps to that part of the code under
some condition. If you then reach the same state as in your initial problem, we
know what we have to fix. Do you think you could please do that?
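A quick way to check that state from the outside is a plain socket probe of
the election port. Below is a minimal sketch in Java; the host and port are
only examples, take the real values from the server.N entries in your zoo.cfg:

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    // Minimal probe of a peer's Fast Leader Election (FLE) port.
    // Host and port are examples; use the election port from the
    // server.N lines in zoo.cfg (3888 in the logs of this report).
    public class FlePortProbe {
        public static void main(String[] args) {
            String host = args.length > 0 ? args[0] : "169.254.1.2";
            int port = args.length > 1 ? Integer.parseInt(args[1]) : 3888;
            try (Socket s = new Socket()) {
                // Short timeout: we only care whether anything accepts.
                s.connect(new InetSocketAddress(host, port), 3000);
                System.out.println(host + ":" + port + " accepts connections");
            } catch (IOException e) {
                // "Connection refused" matches what S3 logs once S2 has
                // stopped listening on its election port.
                System.out.println(host + ":" + port + " NOT reachable: " + e);
            }
        }
    }

If this starts reporting "Connection refused" right after that log line shows
up, that would confirm we are both looking at the same state.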
As I said, the root cause of the problem seems to be network instability.
Looking at the logs, I can see many indications that something is not going
well with the connections; I can't tell whether that is something you can
solve in your environment. But that doesn't change the fact that there may be
something to correct in ZooKeeper.
Regards,
German.
> zk server falling apart from quorum due to connection loss and couldn't
> connect back
> ------------------------------------------------------------------------------------
>
> Key: ZOOKEEPER-1869
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-1869
> Project: ZooKeeper
> Issue Type: Bug
> Components: quorum
> Affects Versions: 3.5.0
> Environment: Using CentOS6 for running these zookeeper servers
> Reporter: Deepak Jagtap
> Priority: Critical
>
> We have deployed ZooKeeper version 3.5.0.1515976 with 3 ZK servers in the
> quorum.
> The problem we are facing is that one ZooKeeper server in the quorum falls
> apart and never rejoins the cluster until we restart the ZooKeeper server on
> that node.
> Our interpretation of the ZooKeeper logs on all the nodes is as follows
> (for simplicity, assume S1 => ZK server 1, S2 => ZK server 2, S3 => ZK server 3):
> Initially S3 is the leader while S1 and S2 are followers.
> S2 hits a 46-second latency while fsyncing the write-ahead log, which results
> in the loss of its connection with S3.
> S3 in turn prints the following error message:
> Unexpected exception causing shutdown while sock still open
> java.net.SocketTimeoutException: Read timed out
> Stack trace
> ******* GOODBYE /169.254.1.2:47647(S2) ********
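> As far as we can tell, the leader reads from a follower's socket with a
> timeout of syncLimit * tickTime, so with the stock settings sketched below
> (our actual values may differ) the limit is only 10 seconds, and a 46-second
> fsync stall is more than enough to trip it:
>
>     # milliseconds per tick
>     tickTime=2000
>     # ticks a follower may lag behind the leader; the leader's
>     # read timeout works out to syncLimit * tickTime = 10 seconds
>     syncLimit=5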
> S2 in this case closes its connection with S3 (the leader) and shuts down the
> follower, with the following log messages:
> Closing connection to leader, exception during packet send
> java.net.SocketException: Socket close
> Follower@194] - shutdown called
> java.lang.Exception: shutdown Follower
> After this point S3 could never reestablish a connection with S2, and the
> leader election mechanism keeps failing. S3 now keeps printing the following
> message repeatedly:
> Cannot open channel to 2 at election address /169.254.1.2:3888
> java.net.ConnectException: Connection refused.
> While S3 is in this state, S2 repeatedly keeps printing the following
> messages:
> INFO
> [NIOServerCxnFactory.AcceptThread:/0.0.0.0:2181:NIOServerCnxnFactory$AcceptThread@296]
> - Accepted socket connection from /127.0.0.1:60667
> Exception causing close of session 0x0: ZooKeeperServer not running
> Closed socket connection for client /127.0.0.1:60667 (no session established
> for client)
> Leader election never completes successfully, causing S2 to fall out of the
> quorum. S2 was out of the quorum for almost a week.
> While debugging this issue, we found that neither the election port nor the
> peer connection port on S2 could be reached via telnet from any of the nodes
> (S1, S2, S3), so network connectivity is not the issue. Later, we restarted
> the ZooKeeper server on S2 (service zookeeper-server restart) -- after that,
> we could telnet to both ports, and S2 joined the ensemble after a leader
> election attempt.
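> While S2 is in this state the client port still accepts TCP connections, so
> a rough way to watch it from a script is to send the "stat" four-letter word
> over a raw socket and look for the "not currently serving requests" reply.
> A sketch in Java; the host is an example:
>
>     import java.io.InputStream;
>     import java.io.OutputStream;
>     import java.net.InetSocketAddress;
>     import java.net.Socket;
>
>     // Rough sketch: send the "stat" four-letter word to the client port.
>     // While the server is wedged, the TCP connect succeeds but the reply
>     // should say the instance is not currently serving requests.
>     public class StatProbe {
>         public static void main(String[] args) throws Exception {
>             String host = args.length > 0 ? args[0] : "127.0.0.1";
>             try (Socket s = new Socket()) {
>                 s.connect(new InetSocketAddress(host, 2181), 3000);
>                 OutputStream out = s.getOutputStream();
>                 out.write("stat".getBytes("US-ASCII"));
>                 out.flush();
>                 InputStream in = s.getInputStream();
>                 byte[] buf = new byte[1024];
>                 int n;
>                 while ((n = in.read(buf)) > 0) {
>                     System.out.print(new String(buf, 0, n, "US-ASCII"));
>                 }
>             }
>         }
>     }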
> Any idea what might be forcing S2 to get into a situation where it won't
> accept any connections on the leader election and peer connection ports?