[
https://issues.apache.org/jira/browse/HADOOP-10434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14635700#comment-14635700
]
Hadoop QA commented on HADOOP-10434:
------------------------------------
\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch | 0m 0s | The patch command could not apply the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12648325/HADOOP-10434-1.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 5137b38 |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/7319/console |
This message was automatically generated.
> Is it possible to use "df" to calculate the dfs usage instead of "du"
> ---------------------------------------------------------------------
>
> Key: HADOOP-10434
> URL: https://issues.apache.org/jira/browse/HADOOP-10434
> Project: Hadoop Common
> Issue Type: Improvement
> Components: fs
> Affects Versions: 2.3.0
> Reporter: MaoYuan Xian
> Priority: Minor
> Labels: BB2015-05-TBR
> Attachments: HADOOP-10434-1.patch
>
>
> When we run the datanode on a machine with a large disk volume, we found that
> the "du" operations issued by org.apache.hadoop.fs.DU's DURefreshThread cost a
> lot of disk I/O.
> Since we dedicate the whole disk to HDFS storage, volume usage could be
> calculated with the "df" command instead. Would it be worthwhile to add a "df"
> option for usage calculation in HDFS
> (org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice)?
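The "df"-style approach the reporter describes can be sketched as below. This is only a minimal illustration of the idea, not the attached patch: the class and method names are hypothetical, and it uses the standard java.io.File space queries, which read filesystem statistics (the same data "df" reports) rather than walking the directory tree as "du" does.

```java
import java.io.File;

// Hypothetical sketch: compute volume usage from filesystem statistics
// instead of a recursive "du"-style traversal of every block file.
public class DfBasedUsage {

    // Returns the used bytes on the volume containing 'path'.
    // getTotalSpace()/getUsableSpace() query the filesystem's own
    // accounting (like "df"), an O(1) call per volume, versus the
    // per-file disk I/O of a "du" walk.
    static long usedBytes(File path) {
        return path.getTotalSpace() - path.getUsableSpace();
    }

    public static void main(String[] args) {
        File volume = new File(args.length > 0 ? args[0] : ".");
        System.out.println("used bytes: " + usedBytes(volume));
    }
}
```

Note the trade-off implicit in the issue: "df" counts everything on the volume, not just HDFS block files, so this is only accurate when the whole disk is dedicated to HDFS storage, as the reporter assumes.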
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)