[ 
https://issues.apache.org/jira/browse/HDFS-11917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16036626#comment-16036626
 ] 

BINGHUI WANG edited comment on HDFS-11917 at 6/5/17 7:16 AM:
-------------------------------------------------------------

Hi Weiwei Yang,
Thank you for the answer, I've got it.


was (Author: fireling):
Thank you for the answer, I’ve got it.

> Why when using the hdfs nfs gateway, a file which is smaller than one block 
> size required a block
> -------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-11917
>                 URL: https://issues.apache.org/jira/browse/HDFS-11917
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: nfs
>    Affects Versions: 2.8.0
>            Reporter: BINGHUI WANG
>            Assignee: Weiwei Yang
>
> I used the Linux shell to put a file into HDFS through the HDFS NFS 
> gateway. I found that if the file is smaller than one block (128 MB), it 
> still takes up a whole block (128 MB) of HDFS storage at first, but after 
> a few minutes the excess storage is released.
> For example: if I put a 60 MB file into HDFS through the HDFS NFS 
> gateway, it takes one block (128 MB) at first. After a few minutes the 
> excess storage (68 MB) is released, and the file ends up using only 60 MB 
> of HDFS storage.
> Why does this happen?
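The observation above can be reproduced from the shell. This is a hedged sketch, not a definitive procedure: it assumes an HDFS NFS gateway is already mounted at /mnt/hdfs, and the mount point, file names, and paths are all hypothetical.

```shell
# Sketch of the reported behavior (assumes a live HDFS cluster and an
# NFS gateway mounted at /mnt/hdfs -- both paths are hypothetical).

# Copy a 60 MB file into HDFS through the NFS mount.
cp /tmp/data-60mb.bin /mnt/hdfs/user/test/data-60mb.bin

# Immediately after the copy, disk usage may report a full 128 MB block,
# as described in the issue.
hdfs dfs -du -h /user/test/data-60mb.bin

# A few minutes later, the same command should report only the actual
# 60 MB, once the excess storage has been released.
hdfs dfs -du -h /user/test/data-60mb.bin
```

Note that `hdfs dfs -du` reports usage as seen by the NameNode, so the two invocations bracket the window in which the excess space is still reserved.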



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
