[
https://issues.apache.org/jira/browse/HADOOP-8068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Todd Lipcon updated HADOOP-8068:
--------------------------------
Attachment: hadoop-8068.txt
The attached patch fixes the issue:
- removes the TRY_ONCE_DONT_FAIL retry action, since it was unused and doesn't
make sense (why would you ever want to fail but swallow the exception?)
- changes the existing retry policies that rethrow the exception so that they
instead return RetryAction.FAIL
- changes the logging so that the retry policy passes a "reason" for the failure
back up to RetryInvocationHandler
- fixes the bug in RetryInvocationHandler that was swallowing exceptions thrown
for void types
We will also test this on a cluster to verify that it resolves the issue.
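For reference, here is a minimal, self-contained sketch of the behavior the patch is after. The {{SimpleRetryPolicy}} and {{SimpleRetryInvocationHandler}} names below are hypothetical stand-ins for illustration, not the classes touched by the patch:
{code:java}
// Hypothetical, simplified stand-ins for the o.a.h.io.retry classes, for illustration only.
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;

interface SimpleRetryPolicy {
  /** Decision carries a reason string so the handler can log why it gave up. */
  final class Decision {
    enum Kind { FAIL, RETRY }
    final Kind kind;
    final String reason;
    Decision(Kind kind, String reason) { this.kind = kind; this.reason = reason; }
  }
  Decision shouldRetry(Exception e, int retries);
}

class SimpleRetryInvocationHandler implements InvocationHandler {
  private final Object target;
  private final SimpleRetryPolicy policy;

  SimpleRetryInvocationHandler(Object target, SimpleRetryPolicy policy) {
    this.target = target;
    this.policy = policy;
  }

  @Override
  public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
    int retries = 0;
    while (true) {
      try {
        return method.invoke(target, args);
      } catch (InvocationTargetException ite) {
        // Assume RPC failures surface as Exceptions, matching the RetryPolicy contract.
        Exception cause = (Exception) ite.getCause();
        SimpleRetryPolicy.Decision d = policy.shouldRetry(cause, retries++);
        if (d.kind == SimpleRetryPolicy.Decision.Kind.FAIL) {
          // The buggy path special-cased void methods and returned null here,
          // silently dropping the error:
          //   if (method.getReturnType() == void.class) { return null; }
          // The fix is to always rethrow, logging the policy's reason for failing.
          System.err.println("Giving up on " + method.getName() + ": " + d.reason);
          throw cause;
        }
        // otherwise fall through and retry the call
      }
    }
  }
}
{code}
The idea, per the change list above, is that the policy implementations stop rethrowing themselves and instead return a FAIL decision carrying a reason, so the invocation handler becomes the single place that logs and rethrows for both void and non-void methods.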
> HA: void methods can swallow exceptions when going through failover path
> ------------------------------------------------------------------------
>
> Key: HADOOP-8068
> URL: https://issues.apache.org/jira/browse/HADOOP-8068
> Project: Hadoop Common
> Issue Type: Bug
> Components: ha, ipc
> Affects Versions: HA Branch (HDFS-1623)
> Reporter: Todd Lipcon
> Assignee: Todd Lipcon
> Priority: Blocker
> Attachments: hadoop-8068.txt
>
>
> While running through scale testing, we saw an issue where clients were
> getting LeaseExpiredExceptions. We eventually tracked it down to the fact
> that some {{create}} calls had timed out (having been sent to a NN just
> before it crashed), but the resulting exception was swallowed by the retry
> policy code paths.