[
https://issues.apache.org/jira/browse/HDFS-13882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16611547#comment-16611547
]
Xiao Chen commented on HDFS-13882:
----------------------------------
Thanks for looking into this, [~knanasi].
I think what we can do here to keep the exponential backoff from growing too far
is to introduce another variable along the lines of a 'maximum wait time between
retries'. Once the backoff interval exceeds that limit, we simply switch to a
fixed-sleep retry. Unlimited exponential backoff just doesn't make sense beyond a
certain point.
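
For illustration, a minimal sketch of that capped-backoff idea (the class,
constant names and values here are made up for the example, not the actual
DFSOutputStream logic):

{code:java}
/**
 * Sketch of exponential backoff capped by a maximum wait time:
 * the wait doubles per attempt until it reaches the cap, after which
 * every further retry uses the same fixed sleep.
 */
public class CappedBackoff {
  private static final long BASE_SLEEP_MS = 400;    // initial wait (illustrative)
  private static final long MAX_SLEEP_MS  = 60_000; // cap on wait between retries

  /** Returns how long to sleep before retry attempt {@code attempt} (0-based). */
  static long sleepMillis(int attempt) {
    // Exponential growth: base * 2^attempt (shift bounded to avoid overflow) ...
    long backoff = BASE_SLEEP_MS << Math.min(attempt, 30);
    // ... but once it passes the cap, fall back to a fixed sleep.
    return Math.min(backoff, MAX_SLEEP_MS);
  }

  public static void main(String[] args) {
    for (int attempt = 0; attempt < 10; attempt++) {
      System.out.println("attempt " + attempt + " -> sleep "
          + sleepMillis(attempt) + " ms");
    }
  }
}
{code}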
Unless [~arpitagarwal] or [~kihwal] has concerns about changing the default of 5
retries here.
> Change dfs.client.block.write.locateFollowingBlock.retries default from 5 to
> 10
> -------------------------------------------------------------------------------
>
> Key: HDFS-13882
> URL: https://issues.apache.org/jira/browse/HDFS-13882
> Project: Hadoop HDFS
> Issue Type: Improvement
> Affects Versions: 3.1.0
> Reporter: Kitti Nanasi
> Assignee: Kitti Nanasi
> Priority: Major
> Attachments: HDFS-13882.001.patch
>
>
> More and more we are seeing cases where customers run into the IOException
> "Unable to close file because the last block does not have enough number of
> replicas" when closing a file on the client. The common workaround is to
> increase dfs.client.block.write.locateFollowingBlock.retries from 5 to 10 (see
> the sketch below).
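
For reference, a minimal sketch of applying that workaround programmatically on
the client side, using the standard Hadoop Configuration and FileSystem APIs;
the class name and path are illustrative, and the same key can of course be set
in hdfs-site.xml instead:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RetriesOverrideExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Raise the retry count from the default 5 to 10, mirroring the workaround
    // described in the issue.
    conf.setInt("dfs.client.block.write.locateFollowingBlock.retries", 10);

    try (FileSystem fs = FileSystem.get(conf)) {
      // Writes through this FileSystem instance will use the higher retry count
      // when waiting for the last block to reach minimum replication on close.
      fs.create(new Path("/tmp/example")).close();
    }
  }
}
{code}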