[ https://issues.apache.org/jira/browse/ZOOKEEPER-1506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14018482#comment-14018482 ]

MUFEED USMAN commented on ZOOKEEPER-1506:
-----------------------------------------

In the case where no IPs change but a mere DNS outage occurs (for some 
unforeseen reason), shouldn't the ZK servers be able to make use of the cached 
info and restore connectivity once DNS is back up?

In the case I'm handling, I faced the following:

2014-02-06 20:22:54,438 [myid:0] - INFO  [WorkerReceiver[myid=0]:FastLeaderElection@542] - Notification: 0 (n.leader), 0x200000171 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x2 (n.peerEPoch), LOOKING (my state)
2014-02-06 20:22:54,451 [myid:0] - WARN  [WorkerSender[myid=0]:QuorumCnxManager@368] - Cannot open channel to 1 at election address node02:3888
java.net.UnknownHostException: node02

Only restarting the ZK servers restored service.
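
To illustrate the behaviour I would expect, here is a rough sketch in plain Java (not ZooKeeper code; the class and method names are made up for illustration) of resolving a peer hostname while keeping the last successful result as a fallback, so a pure DNS outage does not take down inter-node connectivity:

import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical helper, not actual ZooKeeper code: resolve a peer's hostname,
// but fall back to the last successfully resolved address when DNS is
// temporarily unavailable, so an outage alone does not break quorum traffic.
public class CachingResolver {
    private final Map<String, InetAddress> lastKnown = new ConcurrentHashMap<>();

    public InetAddress resolve(String hostname) throws UnknownHostException {
        try {
            InetAddress fresh = InetAddress.getByName(hostname);
            lastKnown.put(hostname, fresh);   // remember the latest good mapping
            return fresh;
        } catch (UnknownHostException e) {
            InetAddress cached = lastKnown.get(hostname);
            if (cached != null) {
                return cached;                // DNS outage: reuse the cached IP
            }
            throw e;                          // never resolved before: give up
        }
    }
}

With something like this, the UnknownHostException above would only surface if the hostname had never resolved since startup.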


> Re-try DNS hostname -> IP resolution if node connection fails
> -------------------------------------------------------------
>
>                 Key: ZOOKEEPER-1506
>                 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-1506
>             Project: ZooKeeper
>          Issue Type: Improvement
>          Components: server
>    Affects Versions: 3.4.5
>         Environment: Ubuntu 11.04 64-bit
>            Reporter: Mike Heffner
>            Assignee: Michael Lasevich
>            Priority: Critical
>              Labels: patch
>             Fix For: 3.5.0
>
>         Attachments: ZOOKEEPER-1506.patch, ZOOKEEPER-1506.patch, 
> zk-dns-caching-refresh.patch
>
>
> In our zoo.cfg we use hostnames to identify the ZK servers that are part of 
> an ensemble. These hostnames are configured with a low (<= 60s) TTL and the 
> IP address they map to can and does change. Our procedure for 
> replacing/upgrading a ZK node is to boot an entirely new instance and remap 
> the hostname to the new instance's IP address. Our expectation is that when 
> the original ZK node is terminated/shutdown, the remaining nodes in the 
> ensemble would reconnect to the new instance.
> However, what we are noticing is that the remaining ZK nodes do not attempt 
> to re-resolve the hostname->IP mapping for the new server. Once the original 
> ZK node is terminated, the existing servers continue to attempt contacting it 
> at the old IP address. It would be great if the ZK servers could try to 
> re-resolve the hostname when attempting to connect to a lost ZK server, 
> instead of caching the lookup indefinitely. Currently we must do a rolling 
> restart of the ZK ensemble after swapping a node -- which at three nodes 
> means we periodically lose quorum.
> The exact method we are following is to boot new instances in EC2 and attach 
> one of a set of three Elastic IP addresses. External to EC2 this IP address 
> remains the same and maps to whatever instance it is attached to. Internal to 
> EC2, the elastic IP hostname has a TTL of about 45-60 seconds and is remapped 
> to the internal (10.x.y.z) address of the instance it is attached to. 
> Therefore, in our case we would like ZK to pick up the new 10.x.y.z address 
> that the elastic IP hostname gets mapped to and reconnect appropriately.
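
For reference, a minimal sketch of the re-resolve-on-connect idea described in the issue (illustrative Java only; this is not the attached patch, and the class and method names are assumptions): build a fresh InetSocketAddress from the configured hostname on every connection attempt, so a changed DNS record is picked up instead of reusing an address resolved once at startup.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Illustrative only, not QuorumCnxManager itself: re-resolve the configured
// hostname on every attempt so a remapped DNS record (e.g. an EC2 elastic IP
// pointing at a replacement instance) is picked up without a rolling restart.
public class ReresolvingConnector {
    public Socket connect(String hostname, int port, int timeoutMs) throws IOException {
        // new InetSocketAddress(host, port) performs a DNS lookup on each call
        InetSocketAddress electionAddr = new InetSocketAddress(hostname, port);
        if (electionAddr.isUnresolved()) {
            throw new IOException("Cannot resolve election address " + hostname + ":" + port);
        }
        Socket sock = new Socket();
        sock.connect(electionAddr, timeoutMs);
        return sock;
    }
}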


