[ https://issues.apache.org/jira/browse/HDFS-13882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16611579#comment-16611579 ]
Xiao Chen commented on HDFS-13882:
----------------------------------
Thanks for the reply [~arpitagarwal], and for sharing your internal defaults.
If the community is not comfortable with it, we shouldn't change the default,
for compat reasons. :)
For this jira, it feels like the improvement we can make is to add a maximum
sleep between retries - otherwise, if someone configures the retry count
higher, the total wait grows exponentially and becomes ridiculously long
(e.g. with 10 retries it adds up to about 409 secs, which seems pretty long
to me). The added maximum config can default to having no effect, for compat.
> Change dfs.client.block.write.locateFollowingBlock.retries default from 5 to
> 10
> -------------------------------------------------------------------------------
>
> Key: HDFS-13882
> URL: https://issues.apache.org/jira/browse/HDFS-13882
> Project: Hadoop HDFS
> Issue Type: Improvement
> Affects Versions: 3.1.0
> Reporter: Kitti Nanasi
> Assignee: Kitti Nanasi
> Priority: Major
> Attachments: HDFS-13882.001.patch
>
>
> More and more, we are seeing cases where customers run into the Java IO
> exception "Unable to close file because the last block does not have enough
> number of replicas" on client file closure. The common workaround is to
> increase dfs.client.block.write.locateFollowingBlock.retries from 5 to 10.