[ 
https://issues.apache.org/jira/browse/HDFS-788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14078649#comment-14078649
 ] 

Arpit Agarwal commented on HDFS-788:
------------------------------------

From a cursory look I don't think it is fixed. Would need to take a closer look to be sure.

> Datanode behaves badly when one disk is very low on space
> ---------------------------------------------------------
>
>                 Key: HDFS-788
>                 URL: https://issues.apache.org/jira/browse/HDFS-788
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>            Reporter: Todd Lipcon
>
> FSDataset.getNextVolume() uses FSVolume.getAvailable() to determine whether 
> to allocate a block on a volume. This doesn't factor in other in-flight 
> blocks that have already been "promised" space on the volume. The resulting 
> issue is that, when a volume is nearly (but not completely) full, multiple 
> blocks will be allocated on that volume, and then they will all hit "Out of 
> space" errors during the write.
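
A minimal sketch of the kind of bookkeeping the report implies, not the actual FSDataset/FSVolume code: track bytes already promised to in-flight writes per volume and subtract them from getAvailable() before choosing a volume. The Volume interface, the chooseAndReserve/release names, and the round-robin policy below are illustrative assumptions.

{code:java}
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Sketch only: a round-robin volume chooser that reserves space for each
// in-flight block so a nearly-full volume is not over-committed.
public class ReservingVolumeChooser {

    /** Hypothetical stand-in for FSVolume: only raw free space is known. */
    public interface Volume {
        long getAvailable();
    }

    private final List<Volume> volumes;
    private final ConcurrentHashMap<Volume, AtomicLong> reserved = new ConcurrentHashMap<>();
    private int nextIndex = 0;

    public ReservingVolumeChooser(List<Volume> volumes) {
        this.volumes = volumes;
    }

    /** Free space minus bytes promised to writes that have not finished yet. */
    private long effectiveAvailable(Volume v) {
        AtomicLong pending = reserved.get(v);
        return v.getAvailable() - (pending == null ? 0L : pending.get());
    }

    /** Pick the next volume that can still hold blockSize bytes and reserve the space. */
    public synchronized Volume chooseAndReserve(long blockSize) {
        for (int i = 0; i < volumes.size(); i++) {
            Volume v = volumes.get((nextIndex + i) % volumes.size());
            if (effectiveAvailable(v) >= blockSize) {
                nextIndex = (nextIndex + i + 1) % volumes.size();
                reserved.computeIfAbsent(v, k -> new AtomicLong(0)).addAndGet(blockSize);
                return v;
            }
        }
        throw new IllegalStateException("Out of space on all volumes");
    }

    /** Call when the block write completes (or fails) to release the reservation. */
    public void release(Volume v, long blockSize) {
        AtomicLong r = reserved.get(v);
        if (r != null) {
            r.addAndGet(-blockSize);
        }
    }
}
{code}

With this kind of reservation, a second block arriving while the first is still being written sees the reduced effective space and is routed to another volume instead of failing mid-write.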



--
This message was sent by Atlassian JIRA
(v6.2#6252)
