[ https://issues.apache.org/jira/browse/HADOOP-1463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12502033 ]
Hairong Kuang commented on HADOOP-1463:
---------------------------------------
I feel that doing "du" to get the size of the data directories is too costly.
The current code does this every 3 seconds. What we can do instead is keep a
counter tracking the total size of all blocks at each datanode. It gets
updated whenever a block is written or deleted, and gets reset by summing up
all block sizes when a block report is sent.
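
A minimal sketch of what that counter could look like; DataNodeUsage and its
method names here are made up for illustration, not the actual FSDataset code
in Hadoop 0.14:

    import java.util.concurrent.atomic.AtomicLong;

    // Hypothetical sketch of the proposed per-datanode usage counter.
    public class DataNodeUsage {
      // Running total of the bytes held in blocks on this datanode.
      private final AtomicLong dfsUsed = new AtomicLong(0);

      // Called after a block is written to a data directory.
      public void blockWritten(long blockSize) {
        dfsUsed.addAndGet(blockSize);
      }

      // Called after a block is deleted from a data directory.
      public void blockDeleted(long blockSize) {
        dfsUsed.addAndGet(-blockSize);
      }

      // Called when a block report is prepared: recompute the exact
      // total so any drift from missed updates is corrected.
      public void resetFromBlockReport(long[] blockSizes) {
        long total = 0;
        for (long size : blockSizes) {
          total += size;
        }
        dfsUsed.set(total);
      }

      // Reported to the namenode in place of running "du" every
      // 3 seconds.
      public long getDfsUsed() {
        return dfsUsed.get();
      }
    }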
> dfs should report total size of all the space that dfs is using
> ---------------------------------------------------------------
>
> Key: HADOOP-1463
> URL: https://issues.apache.org/jira/browse/HADOOP-1463
> Project: Hadoop
> Issue Type: Improvement
> Components: dfs
> Affects Versions: 0.12.3
> Reporter: Hairong Kuang
> Fix For: 0.14.0
>
>
> Currently the namenode reports two statistics back to the client:
> 1. The total capacity of dfs. This is the sum of all datanodes' capacities,
> each of which is calculated by the datanode summing the disk space of all
> its data directories.
> 2. The total remaining space of dfs. This is the sum of all datanodes'
> remaining space. Each datanode's remaining space is calculated using the
> following formula: remaining space = unused space -
> capacity*unusableDiskPercentage - reserved space. So the remaining space
> shows how much space dfs can still use, but it does not show the size of
> the unused space.
> Each dfs client calculates the total dfs used space by subtracting the
> remaining space from the total capacity. So the used space does not
> accurately show the space that dfs is using. However, it is a very
> important number that dfs should provide.
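
To make that last point concrete, here is a worked example with made-up
numbers (100 GB capacity, 20 GB of blocks, 5% unusable, 10 GB reserved);
none of these values come from the issue, and the overcount is exactly the
reserved and unusable deductions in the remaining-space formula:

    // Hypothetical numbers illustrating why capacity - remaining
    // overstates dfs usage.
    public class UsedSpaceExample {
      public static void main(String[] args) {
        long capacity = 100L << 30;          // 100 GB of raw disk
        long dfsBlocks = 20L << 30;          // bytes actually held in blocks
        long unused = capacity - dfsBlocks;  // 80 GB free on disk
        double unusablePct = 0.05;           // unusableDiskPercentage
        long reserved = 10L << 30;           // reserved space

        // Formula from the issue description: 80 - 5 - 10 = 65 GB.
        long remaining = (long) (unused - capacity * unusablePct - reserved);

        // What the client currently reports as "used": 35 GB,
        // even though dfs only holds 20 GB of blocks.
        long reportedUsed = capacity - remaining;

        System.out.println("remaining     = " + (remaining >> 30) + " GB");
        System.out.println("reported used = " + (reportedUsed >> 30) + " GB");
        System.out.println("actual used   = " + (dfsBlocks >> 30) + " GB");
      }
    }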
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.