[
https://issues.apache.org/jira/browse/HDFS-11153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16057160#comment-16057160
]
DENG FEI commented on HDFS-11153:
---------------------------------
At the least, dfs.client.failover.connection.retries.on.timeouts should
default to 1 rather than 0.
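
For example, the key could be overridden in hdfs-site.xml (a sketch of the
suggestion above; the value 1 is the commenter's proposal, not the shipped
default, which is 0):
{noformat}
<property>
  <!-- Suggested override: retry the failover connection once on timeout
       instead of giving up immediately (shipped default is 0). -->
  <name>dfs.client.failover.connection.retries.on.timeouts</name>
  <value>1</value>
</property>
{noformat}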
> RPC Client detect address changed should reconnect immediately
> --------------------------------------------------------------
>
> Key: HDFS-11153
> URL: https://issues.apache.org/jira/browse/HDFS-11153
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: ipc
> Affects Versions: 3.0.0-alpha1
> Reporter: DENG FEI
> Attachments: HDFS-1153.001.patch, stupid.png
>
>
> In HA mode, with _*"ipc.client.connect.max.retries.on.timeouts"*_ and
> _*"ipc.client.connect.max.retries"*_ both set to zero, the client detects
> when the active NN's IP changes but never reconnects, because the maximum
> retry count is already exceeded; it performs 15 failovers and then throws
> a connection or standby exception.
> Perhaps, when the address is found to have changed, the client should
> reconnect immediately, regardless of the retry limit (a minimal sketch of
> this idea follows the log below).
> ----
> The relevant log:
> {noformat}
> 2016-11-16 17:00:20,844 (WARN org.apache.hadoop.ipc.Client 510): Address
> change detected. Old: *****:9000 New: XXXXX:9000
> 2016-11-16 17:01:09,893 (WARN org.apache.hadoop.ipc.Client 510): Address
> change detected. Old: *****:9000 New: XXXXX:9000
> 2016-11-16 17:01:09,893 (WARN
> org.apache.hadoop.io.retry.RetryInvocationHandler 118): Exception while
> invoking class
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo.
> Not retrying because failovers (15) exceeded maximum allowed (15)
> {noformat}
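
Below is a minimal, self-contained sketch of the reconnect loop argued for
above (not the Hadoop source; the class and method names are illustrative):
an address change resets the retry budget and triggers an immediate
reconnect, even when the timeout-retry budget is 0.
{noformat}
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

/**
 * Illustrative sketch, not Hadoop code: a connect loop that reconnects
 * immediately when the server's address changes, instead of counting the
 * failed attempt against the timeout-retry budget.
 */
public class AddressChangeAwareClient {
  private final String host;
  private final int port;
  // Plays the role of ipc.client.connect.max.retries.on.timeouts.
  private final int maxRetriesOnTimeouts;
  private InetSocketAddress server;

  public AddressChangeAwareClient(String host, int port,
                                  int maxRetriesOnTimeouts) {
    this.host = host;
    this.port = port;
    this.maxRetriesOnTimeouts = maxRetriesOnTimeouts;
    this.server = new InetSocketAddress(host, port); // resolves once
  }

  /** Re-resolve the hostname; returns true if the IP behind it changed. */
  private boolean updateAddress() {
    InetSocketAddress fresh = new InetSocketAddress(host, port);
    if (fresh.getAddress() != null && !fresh.equals(server)) {
      System.out.println("Address change detected. Old: " + server
          + " New: " + fresh);
      server = fresh;
      return true;
    }
    return false;
  }

  public Socket connect() throws IOException {
    int timeoutFailures = 0;
    while (true) {
      Socket s = new Socket();
      try {
        s.connect(server, 20_000); // 20s connect timeout
        return s;
      } catch (IOException e) {
        s.close();
        // Key idea from the report: a changed address is new information,
        // not a repeated failure, so reconnect immediately and reset the
        // counter rather than consuming the (possibly zero) retry budget.
        if (updateAddress()) {
          timeoutFailures = 0;
          continue;
        }
        if (++timeoutFailures > maxRetriesOnTimeouts) {
          throw e; // out of retries; caller fails over to the other NN
        }
      }
    }
  }
}
{noformat}
The design point is that a DNS-level address change is new information, not
a repeated failure, so it should not consume the timeout-retry budget.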