Hi Experts,

First, I believe there's no doubt that HDFS uses only what it needs on the
local file system. For example, if we store a 12 KB file in HDFS, HDFS
uses only 12 KB on the local file system; it won't reserve a whole
128 MB block on the local file system for that file.
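
To see both numbers side by side, here is a minimal Java sketch (the class
name is mine, and it assumes core-site.xml/hdfs-site.xml are on the
classpath) using the standard FileSystem API:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class FileVsBlockSize {
        public static void main(String[] args) throws Exception {
            // Picks up core-site.xml / hdfs-site.xml from the classpath.
            FileSystem fs = FileSystem.get(new Configuration());
            FileStatus st = fs.getFileStatus(
                    new Path("/user/user1/filesize/derby.jar"));

            // Actual bytes stored for the file.
            System.out.println("length (bytes)  = " + st.getLen());
            // Per-file block-size attribute recorded in the file's metadata.
            System.out.println("block size attr = " + st.getBlockSize());
        }
    }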

However, I found that the block sizes shown by 'fsck' and '-stat' are
inconsistent:

1) hadoop fsck /user/user1/filesize/derby.jar -files -blocks -locations:
output:
...
BP-1600629425-9.30.122.112-1395627917492:blk_1073743264_2443 len=2673375
...
Total blocks (validated):      1 (avg. block size 2673375 B)
...
conclusion:
The block size shown by fsck is 2673375 B.
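
For comparison, the len= value above can also be read through the client
API; a small sketch (same file and same classpath assumptions as the
sketch above) that prints the actual length of each block:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.BlockLocation;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class PerBlockLengths {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            FileStatus st = fs.getFileStatus(
                    new Path("/user/user1/filesize/derby.jar"));

            // One BlockLocation per block of the file; getLength() returns
            // the bytes actually held in that block, so for derby.jar the
            // single block should report len=2673375 here as well.
            for (BlockLocation loc
                    : fs.getFileBlockLocations(st, 0, st.getLen())) {
                System.out.println(
                        "offset=" + loc.getOffset() + " len=" + loc.getLength());
            }
        }
    }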

2) hadoop dfs -stat "%b %n %o %r %Y" /user/user1/filesize/derby.jar:
output:
2673375 derby.jar 134217728 2 1396662626191
conclusion:
The block size shown by stat is 134217728 B.
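
134217728 B is exactly 128 MB, which I assume is this cluster's
dfs.blocksize; a quick sketch (same assumptions as above) to check that
against the filesystem's default block size:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class DefaultBlockSize {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());

            // Default block size the client would use for new files here.
            System.out.println("default block size = "
                    + fs.getDefaultBlockSize(new Path("/")));
            // 128 * 1024 * 1024 = 134217728, i.e. 128 MB.
            System.out.println("128 MB in bytes    = " + (128L * 1024 * 1024));
        }
    }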

Also, if I browse this file from http://namenode:50070, the file size of
/user/user1/filesize/derby.jar is 2.5 MB (2673375 B), but the block size
is 128 MB (134217728 B).

Why are the block sizes shown by 'fsck' and '-stat' inconsistent?
