[ https://issues.apache.org/jira/browse/HDFS-12285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ming Ma updated HDFS-12285:
---------------------------
    Description: 
The RPC client layer provides functionality to detect an IP address change:

{noformat}
Client.java
    private synchronized boolean updateAddress() throws IOException {
      // Do a fresh lookup with the old host name.
      InetSocketAddress currentAddr = NetUtils.createSocketAddrForHost(
                               server.getHostName(), server.getPort());
    ......
    }
{noformat}
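For illustration, the fresh-lookup idea can be sketched with plain {{java.net}} types. This is a hypothetical standalone sketch, not the actual {{Client.java}} internals; the names mirror the snippet above but the class and field layout are invented here.

```java
import java.net.InetSocketAddress;

public class AddressRefresh {
    // Cached, already-resolved server address (hypothetical stand-in for
    // the RPC connection's cached remote address).
    static InetSocketAddress server = new InetSocketAddress("localhost", 8020);

    // Re-resolve the stored host name and report whether the resolved
    // address differs from the cached one.
    static boolean updateAddress() {
        InetSocketAddress currentAddr =
            new InetSocketAddress(server.getHostName(), server.getPort());
        boolean changed = !currentAddr.equals(server);
        if (changed) {
            server = currentAddr; // adopt the freshly resolved address
        }
        return changed;
    }

    public static void main(String[] args) {
        // With a stable DNS entry, the fresh lookup matches the cache.
        System.out.println(updateAddress()); // false
    }
}
```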

To use this feature, we need to enable retry via 
{{dfs.client.retry.policy.enabled}}. Otherwise the {{TryOnceThenFail}} 
RetryPolicy is used, which causes {{handleConnectionFailure}} to throw a 
{{ConnectException}} without retrying against the new IP address.
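For reference, the opt-in is a client-side property; a minimal hdfs-site.xml sketch (the key comes from the description above, the placement and value are the conventional ones):

```xml
<!-- hdfs-site.xml: enable client retry so a connect failure can be
     retried against the freshly resolved namenode address -->
<property>
  <name>dfs.client.retry.policy.enabled</name>
  <value>true</value>
</property>
```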

{noformat}
    private void handleConnectionFailure(int curRetries, IOException ioe
        ) throws IOException {
      closeConnection();

      final RetryAction action;
      try {
        action = connectionRetryPolicy.shouldRetry(ioe, curRetries, 0, true);
      } catch(Exception e) {
        throw e instanceof IOException? (IOException)e: new IOException(e);
      }
  ......
  }
{noformat}


However, relying on this configuration isn't ideal. DFSClient still holds 
onto the stale cached IP address behind the proxy created by {{namenode = 
proxyInfo.getProxy();}}. Thus each new RPC connection first tries the old IP 
and only reaches the new IP on retry. It would be nice if DFSClient could 
update the namenode proxy automatically upon an IP address change.
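One possible shape for such a fix, as a hedged standalone sketch (hypothetical names; not HDFS code): a holder that re-resolves the host and rebuilds its delegate only when the resolved address has moved, analogous to recreating the proxy via {{proxyInfo.getProxy()}}.

```java
import java.net.InetSocketAddress;
import java.util.function.Function;

public class RefreshingProxy<T> {
    private final String host;
    private final int port;
    private final Function<InetSocketAddress, T> factory;
    private InetSocketAddress addr;
    private T delegate;

    public RefreshingProxy(String host, int port,
                           Function<InetSocketAddress, T> factory) {
        this.host = host;
        this.port = port;
        this.factory = factory;
        this.addr = new InetSocketAddress(host, port);
        this.delegate = factory.apply(addr);
    }

    // Re-resolve the host; rebuild the delegate only if the IP changed.
    public synchronized T get() {
        InetSocketAddress current = new InetSocketAddress(host, port);
        if (!current.equals(addr)) {
            addr = current;
            delegate = factory.apply(addr); // analogous to getProxy()
        }
        return delegate;
    }

    public static void main(String[] args) {
        RefreshingProxy<String> p = new RefreshingProxy<>(
            "localhost", 8020, a -> "proxy@" + a.getPort());
        // Stable DNS: the same delegate is reused across calls.
        System.out.println(p.get().equals(p.get())); // true
    }
}
```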

  was:
The RPC client layer provides functionality to detect an IP address change:

{noformat}
Client.java
    private synchronized boolean updateAddress() throws IOException {
      // Do a fresh lookup with the old host name.
      InetSocketAddress currentAddr = NetUtils.createSocketAddrForHost(
                               server.getHostName(), server.getPort());
    ......
    }
{noformat}

To use this feature, we need to enable retry via 
{{dfs.client.retry.policy.enabled}}. Otherwise the {{TryOnceThenFail}} 
RetryPolicy is used, which causes {{handleConnectionFailure}} to throw a 
{{ConnectException}} without retrying against the new IP address.

{noformat}
    private void handleConnectionFailure(int curRetries, IOException ioe
        ) throws IOException {
      closeConnection();

      final RetryAction action;
      try {
        action = connectionRetryPolicy.shouldRetry(ioe, curRetries, 0, true);
      } catch(Exception e) {
        throw e instanceof IOException? (IOException)e: new IOException(e);
      }
  ......
  }
{noformat}


However, relying on this configuration isn't ideal. DFSClient still has the 
cached old IP address created by {{namenode = proxyInfo.getProxy();}}. Then 
when a new RPC connection is created, it starts with the old IP and only 
reaches the new IP on retry. It would be nice if DFSClient could refresh the 
namenode proxy automatically.


> Better handling of namenode ip address change
> ---------------------------------------------
>
>                 Key: HDFS-12285
>                 URL: https://issues.apache.org/jira/browse/HDFS-12285
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Ming Ma
>


