[ https://issues.apache.org/jira/browse/HDFS-13882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16611572#comment-16611572 ]
Arpit Agarwal commented on HDFS-13882:
--------------------------------------
Also, 10 certainly feels too high - that many retries could prevent timely
recovery from legitimate failures.
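
For a rough sense of scale, here is a back-of-the-envelope sketch (not HDFS
code), assuming the client's completeFile() retry loop sleeps for an initial
400 ms (the default of
dfs.client.block.write.locateFollowingBlock.initial.delay.ms) and doubles the
sleep after each attempt:

    // Hypothetical sketch, not HDFS code: worst-case cumulative wait of
    // the completeFile() retry loop, assuming a 400 ms initial sleep
    // (dfs.client.block.write.locateFollowingBlock.initial.delay.ms)
    // that doubles after every retry.
    public class LocateFollowingBlockBackoff {
      public static void main(String[] args) {
        final long initialDelayMs = 400; // assumed default
        for (int retries : new int[] {5, 10}) {
          long sleepMs = initialDelayMs;
          long totalMs = 0;
          for (int i = 0; i < retries; i++) {
            totalMs += sleepMs; // sleep, then retry completeFile()
            sleepMs *= 2;       // exponential backoff
          }
          System.out.printf("retries=%d -> worst-case wait ~%.1f s%n",
              retries, totalMs / 1000.0);
        }
      }
    }

Under those assumptions the worst-case wait grows from roughly 12 seconds at
5 retries to nearly 7 minutes at 10, which is the concern above.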
> Change dfs.client.block.write.locateFollowingBlock.retries default from 5 to
> 10
> -------------------------------------------------------------------------------
>
> Key: HDFS-13882
> URL: https://issues.apache.org/jira/browse/HDFS-13882
> Project: Hadoop HDFS
> Issue Type: Improvement
> Affects Versions: 3.1.0
> Reporter: Kitti Nanasi
> Assignee: Kitti Nanasi
> Priority: Major
> Attachments: HDFS-13882.001.patch
>
>
> We are seeing more and more cases where customers run into the IOException
> "Unable to close file because the last block does not have enough number of
> replicas" on client file closure. The common workaround is to increase
> dfs.client.block.write.locateFollowingBlock.retries from 5 to 10.
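
For reference, the workaround quoted above is normally applied on the client
side in hdfs-site.xml; a minimal sketch (the key and its default of 5 are as
described in the issue):

    <!-- Raise the client-side retry count used while waiting for the
         last block to be completed on close(); the default is 5. -->
    <property>
      <name>dfs.client.block.write.locateFollowingBlock.retries</name>
      <value>10</value>
    </property>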