This 200 bytes is just a "mental helper", not a precise measure, and it
does NOT take replication into account: each block replica adds another
item of approx. 200 bytes in the NameNode's memory.
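
A quick back-of-the-envelope sketch of that estimate (the ~200-byte figures
are only this rule of thumb, not exact measurements; the class and numbers
below are just for illustration):

    // Rough NameNode memory footprint of one file, using the
    // ~200-bytes-per-object rule of thumb from this thread.
    public class NameNodeMemoryEstimate {
        static final long BYTES_PER_FILE  = 200;  // approx. per file entry
        static final long BYTES_PER_BLOCK = 200;  // approx. per block replica

        static long estimateBytes(long fileSize, long blockSize, int replication) {
            long blocks = (fileSize + blockSize - 1) / blockSize;  // ceiling
            return BYTES_PER_FILE + blocks * replication * BYTES_PER_BLOCK;
        }

        public static void main(String[] args) {
            // 1 GB file, 128 MB blocks, 3 replicas:
            // 200 + 8 * 3 * 200 = 5,000 bytes, not ~1,800.
            System.out.println(estimateBytes(1024L << 20, 128L << 20, 3));
        }
    }
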
MK


2015-03-25 17:16 GMT+00:00 Mich Talebzadeh <[email protected]>:

> Great. Does that 200 bytes per block include the overhead for the three
> replicas? So with a 128MB block size, a 1GB file would be 8 blocks, i.e.
> 200 + 8x200 = around 1800 bytes of memory in the namenode?
>
> Thx
> Let your email find you with BlackBerry from Vodafone
> ------------------------------
> *From: * Mirko Kämpf <[email protected]>
> *Date: *Wed, 25 Mar 2015 16:08:02 +0000
> *To: *[email protected]<[email protected]>; <[email protected]>
> *ReplyTo: * [email protected]
> *Subject: *Re: can block size for namenode be different from datanode
> block size?
>
> Correct. Let's say you run the NameNode with just 1GB of RAM. That would
> be a very strong limitation for the cluster: we need about 200 bytes for
> each file and about 200 bytes for each block. From that we can estimate
> the maximum capacity, depending on the HDFS block size and the average
> file size.
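>
> For example, a rough sketch of such an estimate (the 1 GB average file
> size is only an assumption for illustration):
>
>     // Upper bound on the number of files a 1 GB NameNode heap could
>     // track, using the ~200-bytes-per-object rule of thumb.
>     public class CapacityEstimate {
>         public static void main(String[] args) {
>             long heap        = 1L << 30;    // 1 GB NameNode heap
>             long perObject   = 200;         // bytes per file or block replica
>             long avgFileSize = 1L << 30;    // assume 1 GB average file
>             long blockSize   = 128L << 20;  // 128 MB HDFS block size
>             int  replication = 3;
>
>             long blocksPerFile = (avgFileSize + blockSize - 1) / blockSize;
>             long perFile = perObject * (1 + blocksPerFile * replication);
>             System.out.println("max files ~ " + heap / perFile);  // ~214,000
>         }
>     }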
>
> Cheers,
> Mirko
>
> 2015-03-25 15:34 GMT+00:00 Mich Talebzadeh <[email protected]>:
>
>> Hi Mirko,
>>
>> Thanks for feedback.
>>
>> Since I have worked with in-memory databases, this metadata caching
>> sounds much like an IMDB that caches data at start-up from disk-resident
>> storage.
>>
>> IMDBs tend to run into issues when the cache cannot hold all the data.
>> Is this the case with the metadata as well?
>>
>> Regards,
>>
>> Mich
>> Let your email find you with BlackBerry from Vodafone
>> ------------------------------
>> *From: * Mirko Kämpf <[email protected]>
>> *Date: *Wed, 25 Mar 2015 15:20:03 +0000
>> *To: *[email protected]<[email protected]>
>> *ReplyTo: * [email protected]
>> *Subject: *Re: can block size for namenode be different from datanode
>> block size?
>>
>> Hi Mich,
>>
>> please see my comments inline in your text.
>>
>>
>>
>> 2015-03-25 15:11 GMT+00:00 Dr Mich Talebzadeh <[email protected]>:
>>
>>>
>>> Hi,
>>>
>>> The block size for HDFS is currently set to 128MB by default. This is
>>> configurable.
>>>
>> Correct, an HDFS client can override the config property and define a
>> different block size for the HDFS blocks it writes.
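>>
>> For example, a minimal sketch of this via the Java client API (the path
>> and the 256 MB value are just placeholders):
>>
>>     import org.apache.hadoop.conf.Configuration;
>>     import org.apache.hadoop.fs.FSDataOutputStream;
>>     import org.apache.hadoop.fs.FileSystem;
>>     import org.apache.hadoop.fs.Path;
>>
>>     public class ClientBlockSize {
>>         public static void main(String[] args) throws Exception {
>>             Configuration conf = new Configuration();
>>             // Client-side default for every file this client creates.
>>             conf.setLong("dfs.blocksize", 256L * 1024 * 1024);
>>             FileSystem fs = FileSystem.get(conf);
>>
>>             // Or per file, via the explicit blockSize argument of create().
>>             try (FSDataOutputStream out = fs.create(
>>                     new Path("/tmp/example.dat"), true,
>>                     conf.getInt("io.file.buffer.size", 4096),
>>                     (short) 3, 256L * 1024 * 1024)) {
>>                 out.writeBytes("the client picks the block size\n");
>>             }
>>             fs.close();
>>         }
>>     }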
>>
>>>
>>> My point is that I assume this parameter in hdfs-site.xml sets the
>>> block size for both namenode and datanode.
>>
>> Correct, the block size is an "HDFS-wide setting", but in general it is
>> the HDFS client that creates the blocks.
>>
>>
>>> However, the storage and random access patterns for metadata in the
>>> namenode are different and suit smaller block sizes.
>>>
>> The HDFS block size has no impact here. NameNode metadata is held in
>> memory; for reliability it is persisted to the local disks of the server.
>>
>>
>>>
>>> For example, in Linux the OS block size is 4k, which means one HDFS
>>> block of 128MB spans 32K OS blocks. For metadata this may not be useful
>>> and a smaller block size would be more suitable, hence my question.
>>>
>> Remember, the metadata is in memory. The fsimage file, which contains
>> the metadata, is loaded at startup of the NameNode.
>>
>> Please do not be confused by the two types of block sizes (HDFS blocks
>> vs. OS filesystem blocks).
>>
>> Hope this helps a bit.
>> Cheers,
>> Mirko
>>
>>
>>>
>>> Thanks,
>>>
>>> Mich
>>>
>>
>>
>
