[ 
https://issues.apache.org/jira/browse/HDFS-11153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DENG FEI updated HDFS-11153:
----------------------------
    Description: 
In HA mode, with _*"ipc.client.connect.max.retries.on.timeouts"*_ and 
_*"ipc.client.connect.max.retries"*_ set to zero, the client does detect a 
change in the active NN's IP address, but it will not reconnect because the 
maximum retry count has already been exceeded; after 15 failovers it throws a 
connection or standby exception.
Perhaps, when the address is found to have changed, the client should 
reconnect immediately regardless of the retry limit.
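For reference, the two settings in question as the report describes them (a minimal `core-site.xml` fragment; only these two properties are shown):

```xml
<configuration>
  <!-- Zero retries per connect attempt, as described in this report. -->
  <property>
    <name>ipc.client.connect.max.retries</name>
    <value>0</value>
  </property>
  <property>
    <name>ipc.client.connect.max.retries.on.timeouts</name>
    <value>0</value>
  </property>
</configuration>
```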
----
Log excerpt:
{noformat}
2016-11-16 17:00:20,844 (WARN org.apache.hadoop.ipc.Client 510): Address change 
detected. Old: *****:9000 New: XXXXX:9000
2016-11-16 17:01:09,893 (WARN org.apache.hadoop.ipc.Client 510): Address change 
detected. Old: *****::9000 New: XXXXX:9000
2016-11-16 17:01:09,893 (WARN org.apache.hadoop.io.retry.RetryInvocationHandler 
118): Exception while invoking class 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo.
 Not retrying because failovers (15) exceeded maximum allowed (15)
{noformat}
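The proposed behavior can be sketched as follows. This is a minimal, self-contained illustration, not Hadoop's actual {{org.apache.hadoop.ipc.Client}} code; the class, interface, and method names are hypothetical. The idea: on a connect failure, re-resolve the address, and if it changed, retry immediately with a fresh retry budget instead of counting the attempt against the (possibly zero) retry limit.

```java
import java.util.function.Supplier;

/**
 * Sketch of the proposed retry behavior (illustrative only, not the
 * Hadoop implementation): when a connect attempt fails, re-resolve the
 * server address; if it has changed, reconnect immediately and reset
 * the retry counter rather than charging the failure against the
 * ipc.client.connect.max.retries budget.
 */
public class AddressChangeRetrySketch {

    /** One simulated connect attempt. */
    public interface Connector {
        /** @return the address that accepted the connection, or null on failure */
        String tryConnect(String address);
    }

    /**
     * @param resolve    supplies the current resolution of the NN hostname
     * @param connector  performs one connect attempt
     * @param maxRetries retry budget for an *unchanged* address (may be 0)
     * @return the address connected to, or null once retries are exhausted
     */
    public static String connectWithAddressCheck(Supplier<String> resolve,
                                                 Connector connector,
                                                 int maxRetries) {
        String addr = resolve.get();
        int retries = 0;
        while (true) {
            String connected = connector.tryConnect(addr);
            if (connected != null) {
                return connected;
            }
            String current = resolve.get();
            if (!current.equals(addr)) {
                // Proposed fix: the address changed, so reconnect
                // immediately and reset the retry counter.
                addr = current;
                retries = 0;
                continue;
            }
            if (retries++ >= maxRetries) {
                return null; // budget exhausted for an unchanged address
            }
        }
    }
}
```

Resetting the counter treats the new address as a fresh connection target, so {{max.retries=0}} still guards against retrying a dead address that has *not* changed; only a detected change earns an immediate reconnect.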

  was:
In HA mode, with _*"ipc.client.connect.max.retries.on.timeouts"*_ and 
_*"ipc.client.connect.max.retries"*_ set to zero, the client detects that the 
address has changed, but it will not reconnect because the maximum retry count 
has already been exceeded; after 15 failovers it throws a connection or 
standby exception.
Perhaps, when the address is found to have changed, the client should 
reconnect immediately regardless of the retry limit.
----
Log excerpt:
{noformat}
2016-11-16 17:00:20,844 (WARN org.apache.hadoop.ipc.Client 510): Address change 
detected. Old: *****:9000 New: XXXXX:9000
2016-11-16 17:01:09,893 (WARN org.apache.hadoop.ipc.Client 510): Address change 
detected. Old: *****::9000 New: XXXXX:9000
2016-11-16 17:01:09,893 (WARN org.apache.hadoop.io.retry.RetryInvocationHandler 
118): Exception while invoking class 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo.
 Not retrying because failovers (15) exceeded maximum allowed (15)
{noformat}


> RPC Client should reconnect immediately when it detects an address change
> -------------------------------------------------------------------------
>
>                 Key: HDFS-11153
>                 URL: https://issues.apache.org/jira/browse/HDFS-11153
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: ipc
>    Affects Versions: 3.0.0-alpha1
>            Reporter: DENG FEI
>
> In HA mode, with _*"ipc.client.connect.max.retries.on.timeouts"*_ and 
> _*"ipc.client.connect.max.retries"*_ set to zero, the client does detect 
> the change in the active NN's IP address, but it will not reconnect because 
> the maximum retry count has already been exceeded; after 15 failovers it 
> throws a connection or standby exception.
> Perhaps, when the address is found to have changed, the client should 
> reconnect immediately regardless of the retry limit.
> ----
> Log excerpt:
> {noformat}
> 2016-11-16 17:00:20,844 (WARN org.apache.hadoop.ipc.Client 510): Address 
> change detected. Old: *****:9000 New: XXXXX:9000
> 2016-11-16 17:01:09,893 (WARN org.apache.hadoop.ipc.Client 510): Address 
> change detected. Old: *****::9000 New: XXXXX:9000
> 2016-11-16 17:01:09,893 (WARN 
> org.apache.hadoop.io.retry.RetryInvocationHandler 118): Exception while 
> invoking class 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo.
>  Not retrying because failovers (15) exceeded maximum allowed (15)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
