Ming Ma created HDFS-12285:

             Summary: Better handling of namenode ip address change
                 Key: HDFS-12285
                 URL: https://issues.apache.org/jira/browse/HDFS-12285
             Project: Hadoop HDFS
          Issue Type: Bug
            Reporter: Ming Ma

The RPC client layer provides functionality to detect an IP address change:

    private synchronized boolean updateAddress() throws IOException {
      // Do a fresh lookup with the old host name.
      InetSocketAddress currentAddr = NetUtils.createSocketAddrForHost(
                               server.getHostName(), server.getPort());
      if (!server.equals(currentAddr)) {
        server = currentAddr;  // adopt the newly resolved address
        return true;           // caller retries the connection
      }
      return false;
    }
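The detect-and-update logic above can be mimicked with plain JDK calls; here is a minimal, self-contained sketch (hypothetical class name and example host, no Hadoop dependencies), where constructing a new {{InetSocketAddress}} performs a fresh DNS lookup of the cached host name:

```java
import java.net.InetSocketAddress;

public class AddressChangeCheck {
  // Stand-in for the cached server address (hypothetical example host/port).
  static InetSocketAddress cached = new InetSocketAddress("localhost", 8020);

  // Mirrors updateAddress(): re-resolve the old host name and compare.
  static synchronized boolean updateAddress() {
    // Constructing a new InetSocketAddress triggers a fresh resolution.
    InetSocketAddress current =
        new InetSocketAddress(cached.getHostName(), cached.getPort());
    if (!cached.equals(current)) {
      cached = current;  // adopt the new address
      return true;       // signal the caller to retry the connection
    }
    return false;
  }

  public static void main(String[] args) {
    // localhost's resolution does not change between the two lookups here.
    System.out.println(updateAddress());
  }
}
```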

To use this feature, we need to enable retry via 
{{dfs.client.retry.policy.enabled}}. Otherwise the {{TryOnceThenFail}} 
RetryPolicy is used, which causes {{handleConnectionFailure}} to throw a 
{{ConnectException}} without retrying with the new IP address.

    private void handleConnectionFailure(int curRetries, IOException ioe
        ) throws IOException {

      final RetryAction action;
      try {
        action = connectionRetryPolicy.shouldRetry(ioe, curRetries, 0, true);
      } catch(Exception e) {
        throw e instanceof IOException? (IOException)e: new IOException(e);
      }
      // ... act on the returned RetryAction (fail or sleep and retry) ...
    }
However, using such configuration isn't ideal. DFSClient still has the old 
IP address cached in the proxy created by {{namenode = 
proxyInfo.getProxy();}}. When a new RPC connection is created, it first tries 
the old IP and only then retries with the new one. It would be nice if 
DFSClient could refresh the namenode proxy automatically.
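One possible direction, sketched below with a hypothetical wrapper (not an existing DFSClient API): hide the proxy behind a factory so it can be rebuilt, re-resolving the namenode host name, whenever the RPC layer reports an address change.

```java
import java.util.function.Supplier;

// Hypothetical sketch: a refreshable holder for the namenode proxy.
// P stands in for the real ClientProtocol proxy type.
public class RefreshableProxy<P> {
  private final Supplier<P> factory;  // creates a proxy, re-resolving the host
  private P proxy;

  public RefreshableProxy(Supplier<P> factory) {
    this.factory = factory;
    this.proxy = factory.get();
  }

  public synchronized P get() {
    return proxy;
  }

  // Invoked when the RPC layer detects that the namenode's IP changed.
  public synchronized void refresh() {
    proxy = factory.get();  // rebuild, picking up the new address
  }
}
```

The design choice here is that the refresh is driven by the existing address-change detection in {{updateAddress()}}, so no polling is needed.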

This message was sent by Atlassian JIRA
