[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-3456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16880877#comment-16880877
 ] 

maoling commented on ZOOKEEPER-3456:
------------------------------------

[~Mar_zieh] 

First, I think you should *ping* or *telnet* server2 from another node 
to rule out a network issue.
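
For example (a minimal sketch: 10.32.0.3 is one of the addresses from your log, so substitute server2's real IP, and take the ports from your zoo.cfg):

{code}
# basic reachability from another node
ping -c 3 10.32.0.3

# client port plus the usual quorum/election ports of the ensemble
telnet 10.32.0.3 2181
telnet 10.32.0.3 2888
telnet 10.32.0.3 3888

# a live server answers the "ruok" four-letter word with "imok"
echo ruok | nc 10.32.0.3 2181
{code}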

> Service temporarily unavailable due to an ongoing leader election. Please 
> refresh
> ---------------------------------------------------------------------------------
>
>                 Key: ZOOKEEPER-3456
>                 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-3456
>             Project: ZooKeeper
>          Issue Type: Bug
>          Components: server
>         Environment: docker container with Ubuntu 16.04
>            Reporter: Marzieh
>            Priority: Major
>             Fix For: 3.4.14
>
>
> Hi,
> I configured ZooKeeper with four nodes for my Mesos cluster with Marathon. 
> When I ran the Flink JSON file on Marathon, it ran without problems. But when 
> I entered the IPs of my two slaves, only one slave showed the Flink UI and the 
> other slave showed this error:
>  
> Service temporarily unavailable due to an ongoing leader election. Please 
> refresh
> I checked the "zookeeper.out" file, and it said:
>  
> 2019-07-07 11:48:43,412 [myid:] - INFO [main:QuorumPeerConfig@136] - Reading 
> configuration from: /home/zookeeper-3.4.14/bin/../conf/zoo.cfg
> 2019-07-07 11:48:43,421 [myid:] - INFO [main:QuorumPeer$QuorumServer@185] - 
> Resolved hostname: 0.0.0.0 to address: /0.0.0.0
> 2019-07-07 11:48:43,421 [myid:] - INFO [main:QuorumPeer$QuorumServer@185] - 
> Resolved hostname: 10.32.0.3 to address: /10.32.0.3
> 2019-07-07 11:48:43,422 [myid:] - INFO [main:QuorumPeer$QuorumServer@185] - 
> Resolved hostname: 10.32.0.2 to address: /10.32.0.2
> 2019-07-07 11:48:43,422 [myid:] - INFO [main:QuorumPeer$QuorumServer@185] - 
> Resolved hostname: 10.32.0.5 to address: /10.32.0.5
> 2019-07-07 11:48:43,422 [myid:] - WARN [main:QuorumPeerConfig@354] - 
> Non-optimial configuration, consider an odd number of servers.
> 2019-07-07 11:48:43,422 [myid:] - INFO [main:QuorumPeerConfig@398] - 
> Defaulting to majority quorums
> 2019-07-07 11:48:43,425 [myid:3] - INFO [main:DatadirCleanupManager@78] - 
> autopurge.snapRetainCount set to 3
> 2019-07-07 11:48:43,425 [myid:3] - INFO [main:DatadirCleanupManager@79] - 
> autopurge.purgeInterval set to 0
> 2019-07-07 11:48:43,425 [myid:3] - INFO [main:DatadirCleanupManager@101] - 
> Purge task is not scheduled.
> 2019-07-07 11:48:43,432 [myid:3] - INFO [main:QuorumPeerMain@130] - Starting 
> quorum peer
> 2019-07-07 11:48:43,437 [myid:3] - INFO [main:ServerCnxnFactory@117] - Using 
> org.apache.zookeeper.server.NIOServerCnxnFactory as server connect$
> 2019-07-07 11:48:43,439 [myid:3] - INFO [main:NIOServerCnxnFactory@89] - 
> binding to port 0.0.0.0/0.0.0.0:2181
> 2019-07-07 11:48:43,440 [myid:3] - ERROR [main:QuorumPeerMain@92] - 
> Unexpected exception, exiting abnormally
> java.net.BindException: Address already in use
>  at sun.nio.ch.Net.bind0(Native Method)
>  at sun.nio.ch.Net.bind(Net.java:433)
>  at sun.nio.ch.Net.bind(Net.java:425)
>  at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>  at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>  at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
>  at 
> org.apache.zookeeper.server.NIOServerCnxnFactory.configure(NIOServerCnxnFactory.java:90)
>  at 
> org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:133)
>  at 
> org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:114)
>  at 
> org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:81)
>  
> I have searched a lot but could not find a solution.
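
Also note that the real failure in that log is not the election itself but the line just before the stack trace:

java.net.BindException: Address already in use

i.e. something on that node is already bound to the client port 2181 (for example a stale ZooKeeper process left running in the container). As a sketch, assuming a Linux host with net-tools/lsof available, you could find the offending process like this:

{code}
# which process already holds port 2181?
netstat -tlnp | grep 2181
# or
lsof -i :2181

# a leftover ZooKeeper instance shows up in jps as QuorumPeerMain
jps | grep QuorumPeerMain
{code}

The WARN about a "Non-optimial configuration" is a separate point: a four-server ensemble still needs three servers for a quorum, so it tolerates only one failure, the same as a three-server ensemble. An odd number of servers (3 or 5) is the recommended setup.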



