[ https://issues.apache.org/jira/browse/HADOOP-2845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12570943#action_12570943 ]
martint edited comment on HADOOP-2845 at 2/20/08 9:17 PM:
------------------------------------------------------------------
I've been able to reproduce the failure on Solaris with ZFS. It turns out that
metadata updates on ZFS are asynchronous, so DU does not see the size change
reflected immediately.
The test used to pass without the "-k" flag because by the time du runs it sees
2 and 3 blocks, respectively, so the oldSize < newSize assertion holds. With
-k, those block counts are divided by 2 (integer math), so the comparison
becomes 1 < 1, which fails.
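To make the integer math concrete, here is a tiny sketch using the block counts
quoted above (illustrative values only, not the actual test code):

    public class DuKFlagMath {
      public static void main(String[] args) {
        // 512-byte block counts that du reports by the time it runs (from above).
        long oldBlocks = 2;
        long newBlocks = 3;

        // Without -k the raw block counts are compared: 2 < 3 holds, so the test passes.
        System.out.println(oldBlocks < newBlocks);              // true

        // With -k both counts are halved with integer division, so the
        // oldSize < newSize assertion becomes 1 < 1 and fails.
        System.out.println((oldBlocks / 2) < (newBlocks / 2));  // false
      }
    }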
According to the comments on the test, its intent is to ensure that DU does not
re-run the du command multiple times when the interval is > 0. That behavior is
actually provided by the Shell class (which DU extends), so my recommendation is
to create a separate test to ensure that condition holds.
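As a rough sketch of what that separate test could look like (assuming
org.apache.hadoop.util.Shell takes the refresh interval in its constructor,
exposes a protected run(), and only calls getExecString() when the command is
actually executed; class and method names below are illustrative, not final):

    import java.io.BufferedReader;
    import java.io.IOException;
    import junit.framework.TestCase;
    import org.apache.hadoop.util.Shell;

    public class TestShellInterval extends TestCase {

      /** Shell subclass that counts how often the command really executes. */
      private static class CountingShell extends Shell {
        int execCount = 0;

        CountingShell(long intervalMillis) {
          super(intervalMillis);
        }

        protected String[] getExecString() {
          execCount++;
          return new String[] {"echo", "hello"};   // cheap, portable command
        }

        protected void parseExecResult(BufferedReader lines) throws IOException {
          // the output is irrelevant for this test
        }

        void refresh() throws IOException {
          run();   // run() is protected in Shell, so expose it to the test
        }
      }

      public void testCommandNotRerunWithinInterval() throws IOException {
        CountingShell shell = new CountingShell(60 * 1000);  // one minute
        shell.refresh();
        shell.refresh();   // second call falls inside the interval
        assertEquals(1, shell.execCount);
      }
    }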
I'm working on a patch.
> dfsadmin disk utilization report on Solaris is wrong
> ----------------------------------------------------
>
> Key: HADOOP-2845
> URL: https://issues.apache.org/jira/browse/HADOOP-2845
> Project: Hadoop Core
> Issue Type: Bug
> Components: fs
> Affects Versions: 0.16.0
> Reporter: Martin Traverso
> Fix For: 0.17.0
>
> Attachments: HADOOP-2845.patch
>
>
> dfsadmin reports 2x disk utilization on some platforms (Solaris, MacOS). The
> reason for this is that org.apache.hadoop.fs.DU relies on du's default block
> size when reporting sizes and assumes 1024-byte blocks. This works fine on
> Linux, but du on Solaris and MacOS uses 512-byte blocks to report disk usage.
> DU should use "du -sk" instead of "du -s" to force the command to report
> sizes in 1024-byte blocks.
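For reference, a minimal sketch of that change in org.apache.hadoop.fs.DU,
assuming the command line is built in getExecString() and the output parsed in
parseExecResult() (the attached patch is authoritative; the dirPath and used
fields are illustrative names):

    // Build the du command with -sk so sizes are reported in 1024-byte units
    // on every platform.
    protected String[] getExecString() {
      return new String[] {"du", "-sk", dirPath};
    }

    // The first token of the output is now kilobytes everywhere, so it can be
    // converted to bytes uniformly.
    protected void parseExecResult(BufferedReader lines) throws IOException {
      String line = lines.readLine();
      if (line == null) {
        throw new IOException("Expecting a line, not the end of stream");
      }
      used = Long.parseLong(line.split("\t")[0]) * 1024L;
    }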
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.