[
https://issues.apache.org/jira/browse/HDFS-10441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
James Clampffer updated HDFS-10441:
-----------------------------------
Attachment: HDFS-10441.HDFS-8707.014.patch
New patch to make sure the failover count is actually incremented for all types
of failovers. IncrementFailoverCount now implicitly resets the retry count, as
[~bobhansen] suggested. I cited the wrong line of code with regard to the retry
count last time; what I really meant was
{code}
for (unsigned int i = 0; i < pendingRequests.size(); i++)
  pendingRequests[i]->IncrementFailoverCount();
{code}
That's been moved to the lower block that calls ConnectAndFlush.
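For clarity, here is a minimal sketch of the implicit reset described above. The class and method names mirror the quoted loop, but the body is illustrative only and not the actual libhdfs++ implementation:

```cpp
#include <memory>
#include <vector>

// Hypothetical sketch of the bookkeeping described above; the real
// libhdfs++ request class differs in detail.
class Request {
 public:
  void IncrementRetryCount() { ++retry_count_; }
  // Incrementing the failover count implicitly resets the retry count,
  // so retries are tracked per namenode rather than across failovers.
  void IncrementFailoverCount() {
    ++failover_count_;
    retry_count_ = 0;
  }
  int retry_count() const { return retry_count_; }
  int failover_count() const { return failover_count_; }

 private:
  int retry_count_ = 0;
  int failover_count_ = 0;
};

// Equivalent of the quoted loop: bump the failover count (and thereby
// reset the retry count) on every pending request.
inline void IncrementAllFailoverCounts(
    std::vector<std::shared_ptr<Request>> &pendingRequests) {
  for (auto &req : pendingRequests)
    req->IncrementFailoverCount();
}
```

The point of the implicit reset is that each namenode gets a fresh retry budget after a failover, instead of exhausting one global count.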
Also changed the RPC attempt limit from a hardcoded 3 to the default rpc_retry
count while I was in there.
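The rpc_retry change could look roughly like this; the option struct and field names below are assumptions for illustration, not the actual libhdfs++ API:

```cpp
// Illustrative only: the real libhdfs++ options struct and field names
// may differ from this sketch.
struct RpcOptions {
  // Default retry count applied when no configuration overrides it.
  static constexpr int kDefaultRpcRetries = 3;
  int rpc_retries = kDefaultRpcRetries;
};

// Before the patch the attempt limit was a hardcoded 3; afterwards it
// comes from the configured retry option.
inline bool ShouldRetryRpc(int attempts_so_far, const RpcOptions &opts) {
  return attempts_so_far < opts.rpc_retries;
}
```

The behavior is unchanged at the default, but the limit now follows the configuration rather than a magic number.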
Please let me know what you think. Thanks for all the reviews.
> libhdfs++: HA namenode support
> ------------------------------
>
> Key: HDFS-10441
> URL: https://issues.apache.org/jira/browse/HDFS-10441
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: hdfs-client
> Reporter: James Clampffer
> Assignee: James Clampffer
> Attachments: HDFS-10441.HDFS-8707.000.patch,
> HDFS-10441.HDFS-8707.002.patch, HDFS-10441.HDFS-8707.003.patch,
> HDFS-10441.HDFS-8707.004.patch, HDFS-10441.HDFS-8707.005.patch,
> HDFS-10441.HDFS-8707.006.patch, HDFS-10441.HDFS-8707.007.patch,
> HDFS-10441.HDFS-8707.008.patch, HDFS-10441.HDFS-8707.009.patch,
> HDFS-10441.HDFS-8707.010.patch, HDFS-10441.HDFS-8707.011.patch,
> HDFS-10441.HDFS-8707.012.patch, HDFS-10441.HDFS-8707.013.patch,
> HDFS-10441.HDFS-8707.014.patch, HDFS-8707.HDFS-10441.001.patch
>
>
> If a cluster is HA enabled then do proper failover.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)