[
https://issues.apache.org/jira/browse/HADOOP-2549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12557068#action_12557068
]
Raghu Angadi commented on HADOOP-2549:
--------------------------------------
"reserved" space is incremented by default block size to compensate for the
fact that block size is not transfered in protocol.
Should we add a different version volume.getAvailable() that returns less
negative number if left over space is less than the reserved? Currently it
returns 0.
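For concreteness, here is a minimal sketch of the current behavior versus the
proposed variant, assuming a simplified stand-in for the volume class; the
class name, the fields, and the use of java.io.File.getUsableSpace() are
illustrative assumptions, not the actual FSDataset code:

    class Volume {
        private final java.io.File dataDir;
        private final long reserved;          // dfs.du.reserved, in bytes

        Volume(java.io.File dataDir, long reserved) {
            this.dataDir = dataDir;
            this.reserved = reserved;
        }

        // Current behavior: the result is clamped at 0, so a caller cannot
        // tell "exactly at the reserved limit" apart from "already past it".
        long getAvailable() {
            long remaining = dataDir.getUsableSpace() - reserved;
            return remaining > 0 ? remaining : 0;
        }

        // Proposed variant: let the value go negative, so a caller that has
        // already charged a default block size against this volume can see
        // how far past the reservation it actually is.
        long getAvailableSigned() {
            return dataDir.getUsableSpace() - reserved;
        }
    }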
> hdfs does not honor dfs.du.reserved setting
> -------------------------------------------
>
> Key: HADOOP-2549
> URL: https://issues.apache.org/jira/browse/HADOOP-2549
> Project: Hadoop
> Issue Type: Bug
> Components: dfs
> Affects Versions: 0.14.4
> Environment: FC Linux.
> Reporter: Joydeep Sen Sarma
> Priority: Critical
>
> Running 0.14.4. One of our drives is smaller and keeps hitting disk full.
> I reset the disk reservation to 1 GB, but it filled up quickly again.
> I put some tracing in getNextVolume. The blockSize argument is 0, so every
> volume (regardless of available space) qualifies. Here's the trace:
> /* root disk chosen with 0 available bytes. format is
> <available>:<blocksize>*/
> 2008-01-08 08:08:51,918 WARN org.apache.hadoop.dfs.DataNode: Volume
> /var/hadoop/tmp/dfs/data/current:0:0
> /* some other disk chosen with 300G space. */
> 2008-01-08 08:09:21,974 WARN org.apache.hadoop.dfs.DataNode: Volume
> /mnt/d1/hdfs/current:304725631026:0
> I am going to default the block size to something reasonable when it's zero
> for now.
> This is driving us nuts, since our automounter starts failing when we run out
> of space, so everything's broken.
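To illustrate the getNextVolume() behavior described in the report, here is a
hedged sketch of a round-robin volume picker built on the Volume sketch above;
the class and constant names are assumptions, not the real volume-set code.
With blockSize == 0 the space check degenerates to "available >= 0", which is
always true, so even a full disk qualifies; defaulting the block size, as the
reporter suggests, restores a meaningful check:

    class RoundRobinPicker {
        // Assumed fallback; the reporter's workaround is to substitute a
        // reasonable default when the protocol sends a block size of 0.
        static final long DEFAULT_BLOCK_SIZE = 64L * 1024 * 1024;

        private final Volume[] volumes;
        private int curVolume = 0;

        RoundRobinPicker(Volume[] volumes) {
            this.volumes = volumes;
        }

        Volume getNextVolume(long blockSize) throws java.io.IOException {
            if (blockSize <= 0) {
                blockSize = DEFAULT_BLOCK_SIZE;   // the workaround
            }
            for (int retry = 0; retry < volumes.length; retry++) {
                Volume v = volumes[curVolume];
                curVolume = (curVolume + 1) % volumes.length;
                // Without the default, blockSize == 0 makes this test
                // "getAvailable() >= 0", which every volume passes, even
                // one with no free space left.
                if (v.getAvailable() >= blockSize) {
                    return v;
                }
            }
            throw new java.io.IOException("Insufficient space on all volumes");
        }
    }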