I am considering using a second machine to keep a redundant copy of the HDFS
metadata by setting dfs.name.dir in hdfs-site.xml like this (as in YDN):

<property>
    <name>dfs.name.dir</name>
    <value>/home/hadoop/dfs/name,/mnt/namenode-backup</value>
    <final>true</final>
</property>

where the two directories are on different machines, so that /mnt/namenode-backup
keeps a copy of the HDFS filesystem metadata and its machine can take over as
the namenode if the first machine fails.
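
For context, /mnt/namenode-backup would be an NFS mount exported by the
second machine. A minimal /etc/fstab sketch of that mount, where the host
backup-host and the export path /export/namenode-backup are hypothetical
names of my own:

# backup-host and /export/namenode-backup are placeholder names
backup-host:/export/namenode-backup  /mnt/namenode-backup  nfs  rw,hard  0 0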

So, my question is: how much space will this HDFS metadata consume? I guess
it is proportional to the HDFS capacity. What is that ratio, or what size
should I expect for a 150TB HDFS?
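
To frame the question, here is my own back-of-envelope sketch in Python,
assuming the oft-quoted rule of thumb of roughly 150 bytes of namenode heap
per file/directory/block object and the default 64MB block size (both are
assumptions on my part, not measured numbers):

# Rough namenode metadata estimate. Assumptions, not measured numbers:
#   ~150 bytes of namenode heap per inode/block object (common rule of thumb)
#   64 MB default block size, roughly one block per file
BYTES_PER_OBJECT = 150
BLOCK_SIZE = 64 * 1024**2        # dfs.block.size default, 64 MB
CAPACITY = 150 * 1024**4         # 150 TB of raw HDFS data

blocks = CAPACITY // BLOCK_SIZE            # ~2.4 million blocks
objects = 2 * blocks                       # one file plus one block each
heap_bytes = objects * BYTES_PER_OBJECT    # ~700 MB of namenode heap

print(f"{blocks:,} blocks -> roughly {heap_bytes / 1024**2:,.0f} MB of metadata")

If that per-object rule holds, the size depends more on the number of files
and blocks than on the raw 150TB (and the on-disk fsimage in dfs.name.dir
should be of the same order as the heap figure or smaller), which is part of
what I would like to confirm.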

Thanks,
Michael
