Ming Ma created HDFS-12285:
------------------------------
Summary: Better handling of namenode ip address change
Key: HDFS-12285
URL: https://issues.apache.org/jira/browse/HDFS-12285
Project: Hadoop HDFS
Issue Type: Bug
Reporter: Ming Ma
The RPC client layer already provides functionality to detect an ip address change:
{noformat}
Client.java
private synchronized boolean updateAddress() throws IOException {
  // Do a fresh lookup with the old host name.
  InetSocketAddress currentAddr = NetUtils.createSocketAddrForHost(
      server.getHostName(), server.getPort());
  ......
}
{noformat}
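For context, the detection boils down to re-resolving the cached hostname and comparing the result with the address the connection was originally created with. A minimal standalone sketch of that idea (plain {{java.net}}, not the actual {{Client.java}} code, and {{addressChanged}} is just an illustrative name):
{noformat}
import java.net.InetSocketAddress;

public class AddressChangeCheck {
  // Returns true if a fresh DNS lookup of the cached host name now resolves
  // to a different IP than the one recorded when the connection was set up.
  static boolean addressChanged(InetSocketAddress cachedAddr) {
    InetSocketAddress freshAddr =
        new InetSocketAddress(cachedAddr.getHostName(), cachedAddr.getPort());
    return cachedAddr.getAddress() != null
        && !freshAddr.isUnresolved()
        && !freshAddr.getAddress().equals(cachedAddr.getAddress());
  }
}
{noformat}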
To use this feature, we need to enable retry via
{{dfs.client.retry.policy.enabled}}. Otherwise the {{TryOnceThenFail}} RetryPolicy
is used, which causes {{handleConnectionFailure}} to throw a
{{ConnectException}} without retrying against the new ip address.
{noformat}
private void handleConnectionFailure(int curRetries, IOException ioe
    ) throws IOException {
  closeConnection();

  final RetryAction action;
  try {
    action = connectionRetryPolicy.shouldRetry(ioe, curRetries, 0, true);
  } catch(Exception e) {
    throw e instanceof IOException? (IOException)e: new IOException(e);
  }
  ......
}
{noformat}
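For completeness, enabling the retry policy on the client side looks roughly like the sketch below ({{dfs.client.retry.policy.enabled}} is the key mentioned above; the namenode URI is a made-up example):
{noformat}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class RetryEnabledClient {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Switch the client away from TryOnceThenFail so handleConnectionFailure
    // retries, which lets updateAddress() pick up the re-resolved ip.
    conf.setBoolean("dfs.client.retry.policy.enabled", true);

    // hdfs://mynamenode:8020 is a placeholder namenode address.
    FileSystem fs = FileSystem.get(URI.create("hdfs://mynamenode:8020"), conf);
    System.out.println(fs.getUri());
  }
}
{noformat}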
However, relying on that configuration isn't ideal. DFSClient still holds the old
ip address cached in the proxy created by {{namenode = proxyInfo.getProxy();}}.
So when a new rpc connection is created, it first tries the old ip and only then
retries with the new ip. It would be nice if DFSClient could refresh the namenode
proxy automatically.
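One illustrative shape for such a refresh (purely a sketch, not DFSClient code; {{RefreshingProxy}}, {{createNamenodeProxy}} and the single-retry behavior are all hypothetical): on a connection failure, drop the cached proxy and re-create it so the replacement resolves the current ip.
{noformat}
import java.io.IOException;
import java.net.ConnectException;

// Hypothetical auto-refreshing proxy holder; none of these names exist in
// the real client, this only sketches the intended behavior.
public abstract class RefreshingProxy<T> {
  private volatile T proxy;

  // Would re-resolve the namenode address and build a fresh proxy,
  // e.g. via proxyInfo.getProxy() in the real client.
  protected abstract T createNamenodeProxy() throws IOException;

  public <R> R call(ProxyCall<T, R> op) throws IOException {
    if (proxy == null) {
      proxy = createNamenodeProxy();
    }
    try {
      return op.apply(proxy);
    } catch (ConnectException e) {
      // The cached proxy may still point at the old ip: rebuild it once
      // and retry so the fresh DNS lookup takes effect.
      proxy = createNamenodeProxy();
      return op.apply(proxy);
    }
  }

  public interface ProxyCall<T, R> {
    R apply(T proxy) throws IOException;
  }
}
{noformat}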