We have two ZooKeeper clusters of three servers each.

On one of the clusters, we use the DNS names for all three servers and it works 
fine. On the second cluster, a node will not rejoin the ensemble after being 
restarted unless I put 0.0.0.0 as the address of the current node in its zoo.cfg.

To simulate server failures, I stop the ZooKeeper service on the node that is 
currently the leader and then restart it later.
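
In case it is relevant, one way to confirm which node is currently the leader 
before stopping it is the whitelisted mntr four-letter command (a rough sketch, 
assuming a netcat-style client is available; the exact tooling on Windows may 
differ):

    echo mntr | nc NODE1DNSNAME.local 2181 | grep zk_server_state

This reports zk_server_state as "leader" on the leader and "follower" on the 
other nodes.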

Why do I need to put 0.0.0.0 for the address of the current node in zoo.cfg for 
one group of servers, but not the other? Is there anything I can do differently 
to solve this problem?

With 0.0.0.0 as the current server's address, everything works correctly; the 
only remaining issue is the error message below, shown in the zk_status section 
of the Solr admin UI.
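
If it helps, I believe the raw data behind that zk_status page can also be 
fetched directly (assuming the standard Solr admin handler path; adjust the 
host and port as needed):

    curl "http://localhost:8983/solr/admin/zookeeper/status?wt=json"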

Here is what my zoo.cfg files look like:

-- Node1 zoo.cfg --
tickTime=2000
dataDir=I:/zookeeper/data
clientPort=2181
4lw.commands.whitelist=mntr,conf,ruok
initLimit=5
syncLimit=2
server.1=0.0.0.0:2888:3888
server.2=NODE2DNSNAME.local:2888:3888
server.3=NODE3DNSNAME.local:2888:3888

-- Node2 zoo.cfg --
tickTime=2000
dataDir=I:/zookeeper/data
clientPort=2181
4lw.commands.whitelist=mntr,conf,ruok
initLimit=5
syncLimit=2
server.1=NODE1DNSNAME.local:2888:3888
server.2=0.0.0.0:2888:3888
server.3=NODE3DNSNAME.local:2888:3888

-- Node3 zoo.cfg --
tickTime=2000
dataDir=I:/zookeeper/data
clientPort=2181
4lw.commands.whitelist=mntr,conf,ruok
initLimit=5
syncLimit=2
server.1=NODE1DNSNAME.local:2888:3888
server.2=NODE2DNSNAME.local:2888:3888
server.3=0.0.0.0:2888:3888
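
For reference, one alternative that might apply here (I have not tried it) is 
to keep the real DNS name for the local server in each zoo.cfg and set 
quorumListenOnAllIPs=true so the quorum/election ports listen on all 
interfaces instead of only the configured address. A rough sketch for Node1 
would be:

-- Node1 zoo.cfg (possible alternative, untested) --
tickTime=2000
dataDir=I:/zookeeper/data
clientPort=2181
4lw.commands.whitelist=mntr,conf,ruok
initLimit=5
syncLimit=2
quorumListenOnAllIPs=true
server.1=NODE1DNSNAME.local:2888:3888
server.2=NODE2DNSNAME.local:2888:3888
server.3=NODE3DNSNAME.local:2888:3888

Would that be a reasonable thing to try, or is there a better fix?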
