[jira] [Updated] (HDFS-13882) Set a maximum delay for retrying locateFollowingBlock

2018-11-05 - Sunil Govindan (JIRA)


[ https://issues.apache.org/jira/browse/HDFS-13882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sunil Govindan updated HDFS-13882:
----------------------------------
Fix Version/s: 3.3.0  (was: 3.2.0)

> Set a maximum delay for retrying locateFollowingBlock
> -----------------------------------------------------
>
> Key: HDFS-13882
> URL: https://issues.apache.org/jira/browse/HDFS-13882
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-13882.001.patch, HDFS-13882.002.patch, 
> HDFS-13882.003.patch, HDFS-13882.004.patch, HDFS-13882.005.patch
>
>
> We are increasingly seeing cases where customers run into the IOException
> "Unable to close file because the last block does not have enough number
> of replicas" when closing a file. The common workaround is to increase
> dfs.client.block.write.locateFollowingBlock.retries from 5 to 10.
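
A minimal sketch of that workaround on the client side, assuming a standard
HDFS client setup. The NameNode URI is a placeholder, and setting the key
programmatically here is just one option equivalent to putting it in
hdfs-site.xml:

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class RetryWorkaround {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Workaround from the description: raise the retry count from the
    // default 5 to 10 so that close() tolerates slower block replication.
    conf.setInt("dfs.client.block.write.locateFollowingBlock.retries", 10);
    // "hdfs://namenode:8020" is a placeholder URI for illustration only.
    try (FileSystem fs = FileSystem.get(new URI("hdfs://namenode:8020"), conf)) {
      // ... create, write, and close files under the raised retry limit ...
    }
  }
}
{code}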






[jira] [Updated] (HDFS-13882) Set a maximum delay for retrying locateFollowingBlock

2018-10-10 - Xiao Chen (JIRA)


[ https://issues.apache.org/jira/browse/HDFS-13882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiao Chen updated HDFS-13882:
-----------------------------
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.2.0
       Status: Resolved  (was: Patch Available)

Committed to trunk.

Thanks Kitti for the contribution, and Arpit / Shweta for the comments!
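
For context on what was committed: the client retry loop sleeps between
attempts to complete the file, doubling the delay each time, and this issue
caps that delay. A minimal sketch of the technique, assuming a 400 ms initial
delay and a 60 s cap; both values and the tryCompleteFile stub are
illustrative assumptions, not the literal patch:

{code:java}
import java.io.IOException;

public class CappedBackoffSketch {
  // Hypothetical stand-in for the real completeFile RPC to the NameNode.
  static boolean tryCompleteFile() {
    return false;
  }

  static void completeWithCappedBackoff()
      throws IOException, InterruptedException {
    long delayMs = 400;             // assumed initial retry delay
    final long maxDelayMs = 60_000; // the cap this issue introduces
    int retries = 10;               // the raised retry count from the description
    while (retries-- > 0) {
      if (tryCompleteFile()) {
        return;
      }
      Thread.sleep(delayMs);
      // Without the cap, the doubling delay grows unbounded across retries;
      // with it, every later retry waits at most maxDelayMs.
      delayMs = Math.min(delayMs * 2, maxDelayMs);
    }
    throw new IOException("Unable to close file because the last block "
        + "does not have enough number of replicas");
  }
}
{code}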

> Set a maximum delay for retrying locateFollowingBlock
> -----------------------------------------------------
>
> Key: HDFS-13882
> URL: https://issues.apache.org/jira/browse/HDFS-13882
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13882.001.patch, HDFS-13882.002.patch, 
> HDFS-13882.003.patch, HDFS-13882.004.patch, HDFS-13882.005.patch
>
>
> We are increasingly seeing cases where customers run into the IOException
> "Unable to close file because the last block does not have enough number
> of replicas" when closing a file. The common workaround is to increase
> dfs.client.block.write.locateFollowingBlock.retries from 5 to 10.






[jira] [Updated] (HDFS-13882) Set a maximum delay for retrying locateFollowingBlock

2018-10-10 - Xiao Chen (JIRA)


[ https://issues.apache.org/jira/browse/HDFS-13882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiao Chen updated HDFS-13882:
-----------------------------
Summary: Set a maximum delay for retrying locateFollowingBlock
         (was: Set a maximum for the delay before retrying locateFollowingBlock)

> Set a maximum delay for retrying locateFollowingBlock
> -----------------------------------------------------
>
> Key: HDFS-13882
> URL: https://issues.apache.org/jira/browse/HDFS-13882
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Attachments: HDFS-13882.001.patch, HDFS-13882.002.patch, 
> HDFS-13882.003.patch, HDFS-13882.004.patch, HDFS-13882.005.patch
>
>
> We are increasingly seeing cases where customers run into the IOException
> "Unable to close file because the last block does not have enough number
> of replicas" when closing a file. The common workaround is to increase
> dfs.client.block.write.locateFollowingBlock.retries from 5 to 10.


