[ https://issues.apache.org/jira/browse/HADOOP-1463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12511887 ]
Hairong Kuang commented on HADOOP-1463:
---------------------------------------

+1 on Koji's proposal 2. I am reading more of the code. The current implementation interprets the reserved space as the space reserved per volume. We want it to be the space reserved per datanode, right? I also found out that the period for running "df" is configurable in dfs by setting the value of "dfs.df.interval". The default value is 3000 msec. Should we change the default value to 1 min?

> dfs should report total size of all the space that dfs is using
> ---------------------------------------------------------------
>
>                 Key: HADOOP-1463
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1463
>             Project: Hadoop
>          Issue Type: Improvement
>          Components: dfs
>    Affects Versions: 0.12.3
>            Reporter: Hairong Kuang
>            Assignee: Hairong Kuang
>             Fix For: 0.14.0
>
>
> Currently the namenode reports two statistics back to the client:
> 1. The total capacity of dfs. This is the sum of all datanodes' capacities, each of which a datanode calculates by summing the disk space of all its data directories.
> 2. The total remaining space of dfs. This is the sum of all datanodes' remaining space. Each datanode's remaining space is calculated using the following formula: remaining space = unused space - capacity * unusableDiskPercentage - reserved space. So the remaining space shows how much space dfs can still use, but it does not show the size of the unused space.
> Each dfs client calculates the total dfs used space by subtracting the remaining space from the total capacity. So the used space does not accurately show the space that dfs is using. However, it is a very important number that dfs should provide.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
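For reference, the per-datanode remaining-space formula quoted in the issue description, and the used-space figure the client derives from it, can be sketched as below. All class and method names here are illustrative, not the actual Hadoop identifiers:

```java
// Sketch of the remaining-space arithmetic described in HADOOP-1463.
// Names are illustrative only; the real logic lives in the datanode code.
public class RemainingSpaceSketch {

    /**
     * remaining space = unused space
     *                   - capacity * unusableDiskPercentage
     *                   - reserved space
     */
    public static long remaining(long unused, long capacity,
                                 double unusableDiskPercentage, long reserved) {
        long r = unused - (long) (capacity * unusableDiskPercentage) - reserved;
        return Math.max(r, 0L);  // never report negative space
    }

    /**
     * The client derives "used" as capacity minus remaining, which is why
     * it does not accurately reflect the space dfs itself is occupying.
     */
    public static long usedAsSeenByClient(long capacity, long remaining) {
        return capacity - remaining;
    }

    public static void main(String[] args) {
        long gb = 1024L * 1024 * 1024;
        long capacity = 100 * gb;   // 100 GB volume
        long unused   = 40 * gb;    // 40 GB free on disk
        long reserved = 10 * gb;    // 10 GB reserved
        long rem = remaining(unused, capacity, 0.01, reserved);
        System.out.println(rem);
        System.out.println(usedAsSeenByClient(capacity, rem));
    }
}
```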
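On the "df" period: if the default does change, clusters can still override it in their site configuration, since "dfs.df.interval" is a regular configuration key in milliseconds. A sketch of such an override (the description text below is illustrative, not the shipped default):

```xml
<property>
  <name>dfs.df.interval</name>
  <value>60000</value>  <!-- 1 min, in msec -->
  <description>How often the datanode refreshes disk usage via df.</description>
</property>
```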