[
https://issues.apache.org/jira/browse/HDFS-13882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16605018#comment-16605018
]
Kitti Nanasi commented on HDFS-13882:
-------------------------------------
Thanks for reviewing it [~shwetayakkali]!
I looked into the test failures. TestAddStripedBlocks#testAddUCReplica does fail
with patch v001, because in DFSOutputStream#completeFile the sleep time between
retries is doubled at every attempt, starting from 400 ms, and now that the
default number of retries has increased to 10, the total waiting time has grown
too much. I will upload a patch which increases the sleep time at every attempt
by only the default delay.
The other tests do not seem relevant to this modification and pass locally.
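To illustrate why doubling becomes a problem at 10 retries, here is a rough sketch (not the actual DFSOutputStream code) comparing the total wait of the current exponential backoff with the proposed linear growth, assuming the default initial delay of 400 ms:

```java
// Sketch only: compares total client wait time for exponential vs. linear
// backoff between completeFile retries, assuming a 400 ms initial delay.
public class BackoffComparison {

    // Current behavior: the sleep time is doubled at every attempt.
    static long exponentialTotal(long initialMs, int retries) {
        long total = 0, sleep = initialMs;
        for (int i = 0; i < retries; i++) {
            total += sleep;
            sleep *= 2;
        }
        return total;
    }

    // Proposed behavior: the sleep time grows by the default delay only.
    static long linearTotal(long initialMs, int retries) {
        long total = 0, sleep = initialMs;
        for (int i = 0; i < retries; i++) {
            total += sleep;
            sleep += initialMs;
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(exponentialTotal(400, 5));  // 12400 ms with 5 retries
        System.out.println(exponentialTotal(400, 10)); // 409200 ms, almost 7 minutes
        System.out.println(linearTotal(400, 10));      // 22000 ms with linear growth
    }
}
```

With 10 retries the doubled delays sum to over 400 seconds, which is why the test times out; linear growth keeps the worst case around 22 seconds.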
> Change dfs.client.block.write.locateFollowingBlock.retries default from 5 to
> 10
> -------------------------------------------------------------------------------
>
> Key: HDFS-13882
> URL: https://issues.apache.org/jira/browse/HDFS-13882
> Project: Hadoop HDFS
> Issue Type: Improvement
> Affects Versions: 3.1.0
> Reporter: Kitti Nanasi
> Assignee: Kitti Nanasi
> Priority: Major
> Attachments: HDFS-13882.001.patch
>
>
> More and more we are seeing cases where customers are running into the java
> io exception "Unable to close file because the last block does not have
> enough number of replicas" on client file closure. The common workaround is
> to increase dfs.client.block.write.locateFollowingBlock.retries from 5 to 10.
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]