[
https://issues.apache.org/jira/browse/HADOOP-10434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
MaoYuan Xian updated HADOOP-10434:
----------------------------------
Fix Version/s: 2.3.0
Status: Patch Available (was: Open)
> Is it possible to use "df" to calculate the dfs usage instead of "du"
> ---------------------------------------------------------------------
>
> Key: HADOOP-10434
> URL: https://issues.apache.org/jira/browse/HADOOP-10434
> Project: Hadoop Common
> Issue Type: Improvement
> Components: fs
> Affects Versions: 2.3.0
> Reporter: MaoYuan Xian
> Priority: Minor
> Fix For: 2.3.0
>
> Attachments: HADOOP-10434-1.patch
>
>
> When we run a datanode on a machine with a large disk volume, the "du"
> operations issued by org.apache.hadoop.fs.DU's DURefreshThread consume a
> significant amount of disk I/O.
> Since we dedicate the whole disk to HDFS storage, the volume usage could be
> calculated with the "df" command instead. Would it make sense to add a "df"
> option for usage calculation in HDFS
> (org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice)?
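> A minimal sketch of the idea (not the attached patch; the class name and
> wiring are hypothetical): report the used space of the volume holding the
> block pool directory from filesystem-level statistics, the same figures
> "df" reads, instead of walking the directory tree as "du" does. This only
> holds when the whole disk is dedicated to HDFS, since it counts every byte
> on the partition, not just block files.
> {code:java}
> import java.io.File;
>
> /** Hypothetical df-style space accounting for a block pool volume. */
> public class DfBasedSpaceUsage {
>
>   // Block pool directory, assumed to sit on a volume dedicated to HDFS.
>   private final File bpDir;
>
>   public DfBasedSpaceUsage(File bpDir) {
>     this.bpDir = bpDir;
>   }
>
>   /**
>    * Used bytes on the volume holding bpDir, computed as total minus free,
>    * which matches the "Used" column of df. No directory walk is needed,
>    * so a refresh is cheap compared to spawning du.
>    */
>   public long getUsed() {
>     return bpDir.getTotalSpace() - bpDir.getFreeSpace();
>   }
>
>   /** Bytes still available on this volume (df's "Available" column). */
>   public long getAvailable() {
>     return bpDir.getUsableSpace();
>   }
> }
> {code}
> The trade-off is accuracy: anything else stored on the partition would be
> counted as DFS usage, which is why a "df" mode would make sense as an
> opt-in alternative to the existing du-based refresh rather than a
> replacement.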
--
This message was sent by Atlassian JIRA
(v6.2#6252)