[ 
https://issues.apache.org/jira/browse/HDFS-11752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16004392#comment-16004392
 ] 

maobaolong edited comment on HDFS-11752 at 5/10/17 9:36 AM:
------------------------------------------------------------

[~nroberts] Thank you, now I understand what NonDFS means.


was (Author: maobaolong):
[nroberts] Thank you, i have known the NonDFS mean.

> getNonDfsUsed return 0 if reserved bigger than actualNonDfsUsed
> ---------------------------------------------------------------
>
>                 Key: HDFS-11752
>                 URL: https://issues.apache.org/jira/browse/HDFS-11752
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode, hdfs
>    Affects Versions: 2.7.1
>            Reporter: maobaolong
>              Labels: datanode, hdfs
>             Fix For: 2.7.1
>
>
> {code}
> public long getNonDfsUsed() throws IOException {
>     long actualNonDfsUsed = getActualNonDfsUsed();
>     if (actualNonDfsUsed < reserved) {
>       return 0L;
>     }
>     return actualNonDfsUsed - reserved;
>   }
> {code}
> The code block above is the function that calculates nonDfsUsed, but it can 
> unexpectedly return 0L. Consider the following situation:
> du.reserved  = 50G
> Disk Capacity = 2048G
> Disk Available = 2000G
> Dfs used = 30G
> usage.getUsed() = dirFile.getTotalSpace() - dirFile.getFreeSpace()
>                 = 2048G - 2000G
>                 = 48G
> getActualNonDfsUsed = usage.getUsed() - getDfsUsed()
>                     = 48G - 30G
>                     = 18G
> Since 18G < 50G, `getNonDfsUsed` takes the actualNonDfsUsed < reserved branch and 
> NonDfsUsed is reported as 0. Does that logic make sense?
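
For reference, here is a minimal standalone sketch (not the actual HDFS code; the method and variable names simply mirror the snippet, and the values are taken from the example above) that reproduces the arithmetic and shows why 0 is returned:

{code}
public class NonDfsUsedExample {
  static final long G = 1024L * 1024 * 1024;

  // Mirrors the quoted getNonDfsUsed() logic: the result is clamped to 0
  // whenever actualNonDfsUsed is smaller than the configured reservation.
  static long getNonDfsUsed(long capacity, long available, long dfsUsed, long reserved) {
    long used = capacity - available;        // usage.getUsed() = 2048G - 2000G = 48G
    long actualNonDfsUsed = used - dfsUsed;  // 48G - 30G = 18G
    if (actualNonDfsUsed < reserved) {
      return 0L;                             // 18G < 50G, so 0 is returned
    }
    return actualNonDfsUsed - reserved;
  }

  public static void main(String[] args) {
    // du.reserved = 50G, capacity = 2048G, available = 2000G, dfsUsed = 30G
    System.out.println(getNonDfsUsed(2048 * G, 2000 * G, 30 * G, 50 * G)); // prints 0
  }
}
{code}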



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
