[
https://issues.apache.org/jira/browse/HADOOP-2927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Konstantin Shvachko updated HADOOP-2927:
----------------------------------------
Attachment: testDU.patch
The problem turned out to be in the test. This is how it works.
The test first calculates the file system blockSize, then creates a file of size 2 * blockSize, and finally verifies that du for this file returns 2 * blockSize.
The blockSize itself is calculated by writing a small file (128 bytes) and calling du on it, which is supposed to return a multiple of the block size in kilobytes.
On NTFS, small files occupy only 1K (per the du report) even though the block size is 4K. NTFS is probably not allocating any blocks for small files and keeps their data directly in the MFT. That would explain the reported failure: du on the 128-byte probe returns 1K, so the test takes blockSize = 1024 and writes a 2048-byte file, for which du then reports one full 4K cluster, hence expected:<2048> but was:<4096>.
Anyway, the test is somewhat inconsistent in that it verifies du against a blockSize that was calculated using the same du.
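To make the circularity concrete, here is a simplified sketch of that logic. It is not the verbatim TestDU code: createFile() is a stand-in helper, and only the DU(File, long) constructor and getUsed() (which returns bytes) are assumed from org.apache.hadoop.fs.DU.
{code}
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

import org.apache.hadoop.fs.DU;

// Simplified sketch of the old test logic, not the verbatim TestDU code.
public class OldDULogicSketch {
  // Stand-in helper: write a file of the given size and return it.
  static File createFile(File dir, int size) throws IOException {
    File f = File.createTempFile("dutest", ".dat", dir);
    FileOutputStream out = new FileOutputStream(f);
    out.write(new byte[size]);
    out.getFD().sync();
    out.close();
    return f;
  }

  public static void main(String[] args) throws IOException {
    File dir = new File(System.getProperty("java.io.tmpdir"));
    // Step 1: derive blockSize from du of a 128-byte probe file.
    long blockSize = new DU(createFile(dir, 128), 10000).getUsed();
    // Step 2: create a file of size 2 * blockSize.
    File data = createFile(dir, (int) (2 * blockSize));
    // Step 3: verify du against a value derived from the same du.
    // On NTFS step 1 reports 1K (the data is MFT-resident), so the
    // test expects 2048 here, but du sees one full 4K cluster: 4096.
    System.out.println("expected " + (2 * blockSize)
        + " got " + new DU(data, 10000).getUsed());
  }
}
{code}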
I changed the test so that it now writes a 32K file and checks that du returns the same value. Since 32K is a multiple of every common block size, du should report exactly the written size, with no rounding involved.
The problem reported in HADOOP-2845 would be caught by such a test.
I think this is a more general approach to testing DU.
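For comparison, a minimal sketch of the revised check, again assuming the same DU API; the actual change is in the attached testDU.patch. Writing random data is just one way to keep a compressing file system from reporting less than the written size.
{code}
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.Random;

import junit.framework.TestCase;

import org.apache.hadoop.fs.DU;

// Minimal sketch of the revised check; see testDU.patch for the real change.
public class TestDUSketch extends TestCase {
  public void testDU() throws IOException {
    int writtenSize = 32 * 1024; // 32K: a multiple of any common block size
    File file = File.createTempFile("dutest", ".dat");
    file.deleteOnExit();

    // Random bytes, so a compressing file system cannot report
    // less on-disk usage than was actually written.
    byte[] data = new byte[writtenSize];
    new Random().nextBytes(data);
    RandomAccessFile raf = new RandomAccessFile(file, "rws");
    raf.write(data);
    raf.getFD().sync();
    raf.close();

    // du should now report exactly the written size, no matter
    // what the underlying block size is.
    DU du = new DU(file, 10000); // assumed signature: (path, refresh interval in ms)
    assertEquals(writtenSize, du.getUsed());
  }
}
{code}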
> Unit test fails on Windows: org.apache.hadoop.fs.TestDU.testDU
> --------------------------------------------------------------
>
> Key: HADOOP-2927
> URL: https://issues.apache.org/jira/browse/HADOOP-2927
> Project: Hadoop Core
> Issue Type: Bug
> Components: fs
> Affects Versions: 0.17.0
> Reporter: Mukund Madhugiri
> Assignee: Konstantin Shvachko
> Priority: Blocker
> Fix For: 0.17.0
>
> Attachments: testDU.patch
>
>
> Unit test fails on Windows: org.apache.hadoop.fs.TestDU.testDU
> Here is the output from the test: org.apache.hadoop.fs.TestDU.testDU:
> junit.framework.AssertionFailedError: expected:<2048> but was:<4096>
> at org.apache.hadoop.fs.TestDU.testDU(TestDU.java:82)
> The failure points to the new test code in TestDU.java that just went in as
> part of HADOOP-2845.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.