Hi Mich!

The block size you are referring to applies only to file data stored on the
datanodes. The files the namenode writes locally (the fsimage and the edit
log) are not chunked using this block size.
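You can see the distinction from the command line. A quick sketch (the HDFS
path and the local namenode directory below are made-up examples; substitute
your own):

    # Configured default block size, in bytes (applies only to file data on datanodes):
    hdfs getconf -confKey dfs.blocksize

    # Block size actually used by an existing HDFS file (%o prints its block size):
    hdfs dfs -stat "%o" /user/mich/somefile

    # The fsimage and edit log are ordinary local files under dfs.namenode.name.dir
    # on the namenode host; they are not split into dfs.blocksize chunks:
    ls -lh /hadoop/dfs/name/current/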
HTH,
Ravi
 


     On Wednesday, March 25, 2015 8:12 AM, Dr Mich Talebzadeh 
<[email protected]> wrote:
   

 
Hi,

The block size for HDFS is currently set to 128MB by default. This is
configurable.

My point is that I assume this parameter (dfs.blocksize, set in
hdfs-site.xml) sets the block size for both the namenode and the datanodes.
However, the storage and random-access pattern for metadata on the namenode
is different and suits smaller block sizes.
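As I understand it, dfs.blocksize is also a client-side setting that can be
overridden per write without changing the cluster default, e.g. (file names
made up):

    # Write one file with a 64MB block size instead of the 128MB default:
    hdfs dfs -D dfs.blocksize=67108864 -put mydata.csv /data/mydata.csv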

For example, in Linux the OS block size is 4KB, which means one HDFS block
of 128MB can hold 32K OS blocks. For metadata this may not be useful, and a
smaller block size would be more suitable, hence my question.
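Spelling out the arithmetic (assuming the 4KB Linux block size above):

    128 MB = 131072 KB
    131072 KB / 4 KB = 32768 = 32K OS blocks per HDFS block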

Thanks,

Mich

  
