jbarefoot opened a new pull request #699: Fix so dfs.client.failover.max.attempts is respected correctly
URL: https://github.com/apache/hadoop/pull/699
 
 
   Without this change, the failover count is off by one: you always have to set `dfs.client.failover.max.attempts` one higher than the number of failover attempts you actually want. For example, if you want it to attempt failover exactly once, you have to set `dfs.client.failover.max.attempts=2`.
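   For concreteness, here is what the pre-fix workaround looks like in the client configuration (typically `hdfs-site.xml`; the value shown is the +1 workaround described above, not a recommended setting):
   ```xml
   <!-- Pre-fix workaround: to get exactly one failover attempt, the
        value had to be set one higher than intended. -->
   <property>
     <name>dfs.client.failover.max.attempts</name>
     <value>2</value>
   </property>
   ```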
   
   Without this change, if you set `dfs.client.failover.max.attempts=1` to attempt failover just once, it wouldn't try at all, and you would see this log message:
   ```
   org.apache.hadoop.io.retry.RetryInvocationHandler - Exception while invoking class org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo over hadoop-node-01.docker.infra.atscale.com/127.0.0.21:8020. Not retrying because failovers (1) exceeded maximum allowed (1)
   ```
   
   Note that the check for non-failover retries just below this change is already correct.
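   To illustrate the off-by-one, here is a minimal, hypothetical sketch of the boundary check (the class, method names, and exact structure are assumptions for illustration, not the actual `org.apache.hadoop.io.retry` code):
   ```java
   // Hypothetical sketch of the off-by-one this PR fixes; names and
   // structure are illustrative, not the actual Hadoop retry code.
   public class FailoverOffByOneSketch {

     // Buggy boundary: the counter is judged after it has already been
     // bumped for the failover about to be attempted, so with
     // maxFailovers == 1 the very first failover is rejected, logging
     // "failovers (1) exceeded maximum allowed (1)".
     static boolean allowFailoverBuggy(int failoversSoFar, int maxFailovers) {
       int failovers = failoversSoFar + 1; // counted before the check
       return failovers < maxFailovers;    // 1 < 1 is false: never fails over
     }

     // Fixed boundary: compare the failovers already performed, so
     // maxFailovers == 1 permits exactly one failover attempt.
     static boolean allowFailoverFixed(int failoversSoFar, int maxFailovers) {
       return failoversSoFar < maxFailovers; // 0 < 1 is true: fails over once
     }

     public static void main(String[] args) {
       int max = 1; // dfs.client.failover.max.attempts = 1
       System.out.println("buggy: " + allowFailoverBuggy(0, max)); // false
       System.out.println("fixed: " + allowFailoverFixed(0, max)); // true
     }
   }
   ```
   With `max = 1`, the buggy variant refuses the first failover while the fixed variant allows exactly one, matching the intended meaning of the setting.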
   
   
