[
https://issues.apache.org/jira/browse/HDFS-4699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Chris Nauroth updated HDFS-4699:
--------------------------------
Attachment: HDFS-4699.1.patch
This patch addresses multiple problems that were contributing to the
intermittent failures:
# When {{BlockReceiver}} gets an {{IOException}}, it tries to assess whether
the error is disk-related or network-related, and if disk-related, it calls
{{DiskChecker}}. This test triggers rapid NN failovers, so it's common to see
a mix of different kinds of network errors. The logic for detecting a network
error was incomplete: it miscategorized some network failures as disk-related
and triggered a huge flurry of {{DiskChecker}} activity. Repeated
{{DiskChecker}} calls are particularly sluggish on Windows, because each call
forks a new process. I've added logic to filter out TCP RST and anything
related to a {{java.nio.channels.SocketChannel}} (a rough sketch follows this
list).
# The test triggers rapid NN failovers. The client retry handling uses an
exponential backoff with a maximum delay of 15s between failover attempts.
Particularly on small VMs, I saw successive failover attempts quickly rise to
the 15s delay and sometimes cause the whole test to time out. I've changed the
test configuration to cap the failover delay at 1s (see the configuration
sketch at the end of this comment).
# There is a polling loop that waits up to 30s for lease recovery. Even with
the changes above, I've observed that 30s isn't sufficient on a small VM.
After I increased this to 60s, I saw consistently successful test runs.
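The following is only an illustrative sketch of the kind of network-error
filtering described in item 1, not the actual patch; the class and method
names ({{NetworkErrorFilter}}, {{isNetworkRelated}}) are hypothetical.
{code:java}
import java.io.IOException;
import java.net.SocketException;
import java.nio.channels.ClosedChannelException;

public class NetworkErrorFilter {
  /**
   * Returns true if the exception looks like a network failure (for example a
   * TCP RST surfacing as "Connection reset", or an error on a
   * java.nio.channels.SocketChannel) rather than a local disk problem, so the
   * caller can skip invoking DiskChecker.
   */
  static boolean isNetworkRelated(IOException ioe) {
    if (ioe instanceof SocketException
        || ioe instanceof ClosedChannelException) {
      return true;
    }
    String msg = ioe.getMessage();
    return msg != null
        && (msg.contains("Connection reset") || msg.contains("Broken pipe"));
  }
}
{code}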
I've verified the test on both Mac and Windows.
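For item 2, here is a minimal sketch of how a test could cap the client
failover backoff, assuming {{dfs.client.failover.sleep.base.millis}} and
{{dfs.client.failover.sleep.max.millis}} are the relevant configuration keys;
the helper class is hypothetical.
{code:java}
import org.apache.hadoop.conf.Configuration;

public class FailoverBackoffConfig {
  static Configuration withCappedFailoverDelay() {
    Configuration conf = new Configuration();
    // Start retries quickly and never back off beyond 1 second between
    // failover attempts, instead of the default 15-second maximum.
    conf.setInt("dfs.client.failover.sleep.base.millis", 500);
    conf.setInt("dfs.client.failover.sleep.max.millis", 1000);
    return conf;
  }
}
{code}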
> TestPipelinesFailover#testPipelineRecoveryStress fails sporadically
> -------------------------------------------------------------------
>
> Key: HDFS-4699
> URL: https://issues.apache.org/jira/browse/HDFS-4699
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: test
> Affects Versions: 3.0.0
> Reporter: Chris Nauroth
> Assignee: Chris Nauroth
> Attachments: HDFS-4699.1.patch
>
>
> I have seen {{TestPipelinesFailover#testPipelineRecoveryStress}} fail
> sporadically due to timeout during {{loopRecoverLease}}, which waits for up
> to 30 seconds before timing out.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira