[ https://issues.apache.org/jira/browse/HADOOP-1463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12502045 ]
Raghu Angadi commented on HADOOP-1463:
--------------------------------------
+1. Except that during a block report we should probably reset with 'du'
instead of summing over block sizes, so that it takes all the other overhead
of the Datanode directory into account (native filesystem, directories,
'previous' directories, metadata files, tmp directory, etc.). But between
reports it could be updated with block sizes as you described; the accrued
error until the next block report would be small.
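
A minimal sketch of the accounting suggested above, assuming hypothetical
names (DfsUsedTracker, runDu); this is illustrative only, not the actual
Datanode code:

{code:java}
import java.io.BufferedReader;
import java.io.File;
import java.io.IOException;
import java.io.InputStreamReader;

// Hypothetical sketch: reset 'dfsUsed' from 'du' at each block report so
// all on-disk overhead is counted, and update it incrementally with block
// sizes in between.
class DfsUsedTracker {
  private long dfsUsed; // bytes used under the data directory

  // At block report time: reset from 'du' so native-filesystem overhead,
  // directories, 'previous' directories, metadata files, tmp files, etc.
  // are all included.
  synchronized void resetAtBlockReport(File dataDir) throws IOException {
    dfsUsed = runDu(dataDir);
  }

  // Between block reports: update with block sizes; the accrued error
  // (per-block overhead not counted here) stays small until the next
  // report resets it.
  synchronized void blockAdded(long blockSize)   { dfsUsed += blockSize; }
  synchronized void blockDeleted(long blockSize) { dfsUsed -= blockSize; }

  synchronized long getDfsUsed() { return dfsUsed; }

  // Hypothetical helper: run "du -sk <dir>" and convert kilobytes to bytes.
  private static long runDu(File dir) throws IOException {
    Process p = new ProcessBuilder("du", "-sk", dir.getPath()).start();
    BufferedReader r = new BufferedReader(
        new InputStreamReader(p.getInputStream()));
    try {
      long kb = Long.parseLong(r.readLine().split("\\s+")[0]);
      return kb * 1024L;
    } finally {
      r.close();
    }
  }
}
{code}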
> dfs should report total size of all the space that dfs is using
> ---------------------------------------------------------------
>
> Key: HADOOP-1463
> URL: https://issues.apache.org/jira/browse/HADOOP-1463
> Project: Hadoop
> Issue Type: Improvement
> Components: dfs
> Affects Versions: 0.12.3
> Reporter: Hairong Kuang
> Fix For: 0.14.0
>
>
> Currently the namenode reports two statistics back to the client:
> 1. The total capacity of dfs. This is the sum of all datanodes' capacities,
> each of which a datanode calculates by summing the disk space of all its
> data directories.
> 2. The total remaining space of dfs. This is the sum of all datanodes'
> remaining space. Each datanode's remaining space is calculated with the
> following formula: remaining space = unused space -
> capacity*unusableDiskPercentage - reserved space. So the remaining space
> shows how much space dfs can still use, but it does not show the size of
> the unused space.
> Each dfs client calculates the total dfs used space by subtracting the
> remaining space from the total capacity. So the used space does not
> accurately show the space that dfs is using. However, it is a very
> important number that dfs should provide.
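
For concreteness, here is the arithmetic the quoted report describes, with
illustrative field names (unusableDiskPct and reserved are assumptions, not
the actual DFS field names):

{code:java}
// Illustrative arithmetic only; field names are assumptions.
class DatanodeSpaceStats {
  long capacity;          // sum of the data directories' disk sizes (bytes)
  long unused;            // free space left on the underlying disks (bytes)
  double unusableDiskPct; // fraction of capacity assumed unusable
  long reserved;          // space reserved for non-dfs use (bytes)

  // remaining space = unused space - capacity*unusableDiskPercentage
  //                   - reserved space
  long remaining() {
    return unused - (long) (capacity * unusableDiskPct) - reserved;
  }

  // What clients derive today: used = capacity - remaining. This folds in
  // the reserved space, the unusable fraction, and any non-dfs files on the
  // same disks, so it overstates what dfs itself is using; hence the request
  // for a directly measured dfs-used figure.
  long derivedUsed() {
    return capacity - remaining();
  }
}
{code}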