[ 
https://issues.apache.org/jira/browse/HDFS-14313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-14313:
-------------------------------
    Description: 
There are two ways of getting used space, DU and DF, and both are insufficient:
 #  Running DU across lots of disks is very expensive, and running all of the 
processes at the same time creates a noticeable IO spike.
 #  Running DF is inaccurate when the disk is shared by multiple DataNodes or 
other services.

 Getting HDFS used space from the ReplicaInfos in FsDatasetImpl#volumeMap, 
which is held in memory, is cheap and accurate. 
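The proposal above can be sketched as follows. This is a minimal, self-contained illustration, not the actual HDFS patch: the ReplicaInfo class, the getBytesOnDisk accessor, and the volumeMap shape are simplified stand-ins for the real FsDatasetImpl#volumeMap / ReplicaInfo types. The point is that summing replica lengths over an in-memory map avoids forking a du process or calling df.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ReplicaMapUsedSpace {

    /** Simplified stand-in for HDFS's ReplicaInfo (hypothetical fields). */
    static class ReplicaInfo {
        private final long numBytes;   // block data file length
        private final long metaBytes;  // checksum/meta file length

        ReplicaInfo(long numBytes, long metaBytes) {
            this.numBytes = numBytes;
            this.metaBytes = metaBytes;
        }

        long getBytesOnDisk() {
            return numBytes + metaBytes;
        }
    }

    /**
     * Sum used space across all replicas tracked in the in-memory map,
     * instead of shelling out to du or querying df.
     */
    static long getUsedSpace(Map<Long, ReplicaInfo> volumeMap) {
        long used = 0;
        for (ReplicaInfo r : volumeMap.values()) {
            used += r.getBytesOnDisk();
        }
        return used;
    }

    public static void main(String[] args) {
        // Stand-in for volumeMap keyed by block ID.
        Map<Long, ReplicaInfo> volumeMap = new ConcurrentHashMap<>();
        volumeMap.put(1L, new ReplicaInfo(128L * 1024 * 1024, 1024));
        volumeMap.put(2L, new ReplicaInfo(64L * 1024 * 1024, 512));
        System.out.println("used bytes: " + getUsedSpace(volumeMap));
    }
}
```

Because only replicas belonging to this DataNode appear in the map, the result is unaffected by other DataNodes or services sharing the same disk, which is the DF inaccuracy described above.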

  was:
There are two ways of getting used space, DU and DF, and both are insufficient:
 #  Running DU across lots of disks is very expensive, and running all of the 
processes at the same time creates a noticeable IO spike.
 #  Running DF is inaccurate when the disk is shared by multiple DataNodes or 
other services.

 Getting HDFS used space from FsDatasetImpl#volumeMap, which is held in 
memory, is cheap and accurate. 


> Get hdfs used space from FsDatasetImpl#volumeMap#ReplicaInfo instead of df/du
> -----------------------------------------------------------------------------
>
>                 Key: HDFS-14313
>                 URL: https://issues.apache.org/jira/browse/HDFS-14313
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: datanode, performance
>    Affects Versions: 2.8.0, 3.0.0-alpha1
>            Reporter: Lisheng Sun
>            Priority: Major
>
> There are two ways of getting used space, DU and DF, and both are insufficient:
>  #  Running DU across lots of disks is very expensive, and running all of the 
> processes at the same time creates a noticeable IO spike.
>  #  Running DF is inaccurate when the disk is shared by multiple DataNodes or 
> other services.
>  Getting HDFS used space from the ReplicaInfos in FsDatasetImpl#volumeMap, 
> which is held in memory, is cheap and accurate. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
