[
https://issues.apache.org/jira/browse/HADOOP-2845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12571691#action_12571691
]
Konstantin Shvachko commented on HADOOP-2845:
---------------------------------------------
> wait(5000)
This is too bad; we are trying to avoid waits in tests, mainly because they are
not deterministic.
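For context, the usual alternative to a fixed wait(5000) is to poll for the
condition with a bounded deadline, so the test proceeds as soon as the state is
reached and fails deterministically on timeout. A minimal, hypothetical sketch
(not part of this patch; the helper name is made up):

    import java.util.function.BooleanSupplier;

    public class WaitUtil {
        public static void waitFor(BooleanSupplier condition, long timeoutMillis)
                throws InterruptedException {
            long deadline = System.currentTimeMillis() + timeoutMillis;
            while (!condition.getAsBoolean()) {
                if (System.currentTimeMillis() > deadline) {
                    throw new AssertionError(
                        "condition not met within " + timeoutMillis + " ms");
                }
                Thread.sleep(100); // short poll interval, not one long sleep
            }
        }
    }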
Can't believe ZFS doesn't have meta-data synchronization; it's POSIX, right?
Yes, 0 is stable. I du files that were created last year and get the same
result.
I don't know about attribute caching, but I guess timeouts are in seconds, not
in hours or years.
> dfsadmin disk utilization report on Solaris is wrong
> ----------------------------------------------------
>
> Key: HADOOP-2845
> URL: https://issues.apache.org/jira/browse/HADOOP-2845
> Project: Hadoop Core
> Issue Type: Bug
> Components: fs
> Affects Versions: 0.16.0
> Reporter: Martin Traverso
> Assignee: Martin Traverso
> Fix For: 0.17.0
>
> Attachments: HADOOP-2845-1.patch, HADOOP-2845.patch
>
>
> dfsadmin reports 2x disk utilization on some platforms (Solaris, MacOS). The
> reason is that org.apache.hadoop.fs.DU relies on du's default block size when
> reporting sizes and assumes 1024-byte blocks. This works fine on Linux, but du
> on Solaris and MacOS uses 512-byte blocks to report disk usage.
> DU should use "du -sk" instead of "du -s" to force the command to report
> sizes in 1024-byte blocks.
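A minimal, hypothetical sketch of the fix described above (not the actual
patch): invoke "du -sk" so the count is in 1024-byte units on every platform,
then parse the first token of the output. The class and method names are
illustrative only.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    public class DuSketch {
        // Returns the disk usage of 'path' in bytes, using "du -sk" so the
        // count is in 1K blocks regardless of the platform's du default
        // (512-byte blocks on Solaris/MacOS, 1024-byte blocks on Linux).
        public static long getUsedBytes(String path) throws Exception {
            Process p = Runtime.getRuntime().exec(new String[] {"du", "-sk", path});
            BufferedReader r =
                new BufferedReader(new InputStreamReader(p.getInputStream()));
            String line = r.readLine(); // e.g. "123456\t/data/dfs"
            p.waitFor();
            // First whitespace-delimited token is the size in 1024-byte blocks.
            return Long.parseLong(line.split("\\s+")[0]) * 1024L;
        }
    }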
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.