[
https://issues.apache.org/jira/browse/HDFS-11917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16034455#comment-16034455
]
Weiwei Yang commented on HDFS-11917:
------------------------------------
Hi [~fireling]
This is not a bug; it is how HDFS works. A file smaller than the block size is
still stored in a single block, but that does not mean the file consumes a full
block of space on the system. The DataNode has a background thread that
calculates disk usage and reports it back to the NameNode at an interval
defined by "fs.du.interval", so it takes a while before the NameNode
acknowledges the actual space used. I am closing this as INVALID. Next time,
please try raising your question on the user mailing list before filing a JIRA.
Feel free to reopen if you disagree. Thank you.
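For reference, a minimal sketch of how the disk-usage refresh interval can be
checked and tuned. The 60000 ms override below is an assumption for
illustration, not a value from this issue; the shipped default is on the order
of ten minutes (600000 ms).

    # Print the currently configured disk-usage refresh interval (milliseconds).
    hdfs getconf -confKey fs.du.interval

    # Hypothetical core-site.xml override so the DataNode recomputes and reports
    # its disk usage more often (every 60 seconds):
    #   <property>
    #     <name>fs.du.interval</name>
    #     <value>60000</value>
    #   </property>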
> Why, when using the HDFS NFS gateway, does a file smaller than one block
> size require a full block?
> -------------------------------------------------------------------------------------------------
>
> Key: HDFS-11917
> URL: https://issues.apache.org/jira/browse/HDFS-11917
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: nfs
> Affects Versions: 2.8.0
> Reporter: BINGHUI WANG
>
> I use the Linux shell to put a file into HDFS through the HDFS NFS gateway. I
> found that if the file is smaller than one block (128 MB), it still appears to
> take one full block (128 MB) of HDFS storage when written this way. But after
> a few minutes the excess storage is released.
> e.g.: If I put a 60 MB file into HDFS through the HDFS NFS gateway, it takes
> one block (128 MB) at first. After a few minutes the excess storage (68 MB) is
> released, and the file only uses 60 MB of HDFS storage in the end.
> Why does this happen?
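A rough shell sketch of how the behavior described above can be observed. The
NFS mount point /hdfs_nfs and the file name are hypothetical, and the wait at
the end depends on the configured fs.du.interval.

    # Copy a file smaller than one block (128 MB) through the NFS gateway mount.
    cp file-60MB.bin /hdfs_nfs/tmp/file-60MB.bin

    # Right after the copy, the cluster-wide "DFS Used" figure can show roughly
    # a full block consumed for the new file.
    hdfs dfsadmin -report | grep "DFS Used"

    # The logical file size is 60 MB the whole time.
    hdfs dfs -du -h /tmp/file-60MB.bin

    # After the DataNode's next disk-usage scan (fs.du.interval, about ten
    # minutes by default), the reported usage drops to the space actually used.
    hdfs dfsadmin -report | grep "DFS Used"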
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)