Hi Mich,

Please see my comments inline in your text.



2015-03-25 15:11 GMT+00:00 Dr Mich Talebzadeh <[email protected]>:

>
> Hi,
>
> The block size for HDFS is currently set to 128MB by default. This is
> configurable.
>
Correct, an HDFS client can override the configuration property and define a
different block size for the HDFS blocks it writes.
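
For illustration only (this is not from the original thread): a minimal Java
sketch of a client choosing its own block size, assuming a Hadoop 2.x client
where the property is called dfs.blocksize; the path and the 256MB value are
made up for the example.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class BlockSizeExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Override the cluster-wide default for this client only
            // (property name assumed to be dfs.blocksize, Hadoop 2.x).
            conf.setLong("dfs.blocksize", 256L * 1024 * 1024); // 256MB
            FileSystem fs = FileSystem.get(conf);
            // The blocks of this file are cut by the client at 256MB,
            // regardless of the cluster-wide default.
            try (FSDataOutputStream out =
                     fs.create(new Path("/tmp/blocksize-example.dat"))) {
                out.writeUTF("written with a client-side block size");
            }
            fs.close();
        }
    }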

>
> My point is that I assume this parameter in hadoop-core.xml sets the
> block size for both namenode and datanode.

Correct, the block size is an "HDFS-wide setting", but in general it is the
HDFS client that creates the blocks.


> However, the storage and
> random access for metadata in the namenode is different and suits smaller
> block sizes.
>
The HDFS block size has no impact here. NameNode metadata is held in memory.
For reliability it is dumped to the local disks of the server.


>
> For example in Linux the OS block size is 4k, which means one HDFS block
> size of 128MB can hold 32K OS blocks. For metadata this may not be
> useful and a smaller block size would be suitable, hence my question.
>
Remember, the metadata is kept in memory. The fsimage file, which contains the
metadata, is loaded at startup of the NameNode.

Please don't be confused by the two types of block sizes (HDFS blocks versus
OS filesystem blocks).

Hope this helps a bit.
Cheers,
Mirko


>
> Thanks,
>
> Mich
>
