[ https://issues.apache.org/jira/browse/HDFS-788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12783405#action_12783405 ]

dhruba borthakur commented on HDFS-788:
---------------------------------------

I agree: being conservative is better than failing the write!

> Datanode behaves badly when one disk is very low on space
> ---------------------------------------------------------
>
>                 Key: HDFS-788
>                 URL: https://issues.apache.org/jira/browse/HDFS-788
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: data-node
>            Reporter: Todd Lipcon
>
> FSDataset.getNextVolume() uses FSVolume.getAvailable() to decide whether to 
> allocate a block on a volume. This does not account for other in-flight 
> blocks that have already been "promised" space on that volume. As a result, 
> if a volume is nearly full but not completely full, multiple blocks may be 
> allocated on it, and they will all hit "Out of space" errors during the 
> write.
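
For illustration, here is a minimal, hypothetical sketch of the kind of accounting the report suggests: track the bytes already "promised" to in-flight writes per volume and subtract them from the raw free space before choosing a volume. Only FSDataset.getNextVolume() and FSVolume.getAvailable() are names from the report; Volume, RoundRobinVolumeChooser, getEffectiveAvailable(), reserve(), and release() are made-up names for this sketch and are not the actual HDFS code.

import java.io.IOException;
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical, simplified sketch -- not the real FSDataset/FSVolume classes.
class Volume {
  private final long capacity;                            // total bytes on this disk
  private final long used;                                // bytes already written
  private final AtomicLong reserved = new AtomicLong(0);  // bytes promised to in-flight writes

  Volume(long capacity, long used) {
    this.capacity = capacity;
    this.used = used;
  }

  // Raw free space, analogous to FSVolume.getAvailable().
  long getAvailable() {
    return capacity - used;
  }

  // Free space minus space already promised to in-flight block writes.
  long getEffectiveAvailable() {
    return getAvailable() - reserved.get();
  }

  void reserve(long bytes) { reserved.addAndGet(bytes); }
  void release(long bytes) { reserved.addAndGet(-bytes); }
}

class RoundRobinVolumeChooser {
  private final List<Volume> volumes;
  private int nextIndex = 0;

  RoundRobinVolumeChooser(List<Volume> volumes) {
    this.volumes = volumes;
  }

  // Analogue of FSDataset.getNextVolume(): skip volumes whose *effective*
  // free space (raw free space minus reserved bytes) cannot hold the block.
  synchronized Volume getNextVolume(long blockSize) throws IOException {
    for (int i = 0; i < volumes.size(); i++) {
      Volume v = volumes.get((nextIndex + i) % volumes.size());
      if (v.getEffectiveAvailable() >= blockSize) {
        nextIndex = (nextIndex + i + 1) % volumes.size();
        v.reserve(blockSize);  // promise the space before the write starts
        return v;
      }
    }
    throw new IOException("No volume has " + blockSize + " bytes available");
  }
}

In this sketch the caller would call release() once the block write completes or is aborted, so the reservation is either converted into real usage or returned to the pool. That is the conservative behavior the comment endorses: over-counting promised space may skip a volume that would have fit the block, but it avoids handing out space that several concurrent writes then fail to find.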
