Hi Bejoy,

Thank you for your answer.
- I read about the Secondary Name Node, but it is not a real-time solution.
With this solution there is a delay of some minutes...
- With NFS, how can I make the Data Nodes automatically switch to another
Name Node?
  Maybe this: http://wiki.apache.org/hadoop/NameNodeFailover or I found
this: https://issues.apache.org/jira/browse/HDFS-976

Does somebody have experience with Avatar Node?

Tibi


2012/3/22 Bejoy Ks <bejoy.had...@gmail.com>

> Hi Tibi
>       To recover from a Name Node failure, the following are
> the suggested approaches as of now
>
> - Maintain a SNN which does periodic check pointing
> *
> http://hadoop.apache.org/common/docs/current/hdfs_user_guide.html#Secondary+NameNode
> *
> - Make sure you configure a remote NFS mount, so you have a remote copy of
> fs image on another server even if you fully lose out the running NN
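> As a minimal sketch of that setup (directory paths here are just
> examples, adjust to your environment): list the NFS mount alongside the
> local directory in dfs.name.dir in hdfs-site.xml, and the NameNode will
> write its fsimage and edit log to both locations:
>
>   <property>
>     <name>dfs.name.dir</name>
>     <value>/data/hadoop/name,/mnt/nfs/hadoop/name</value>
>   </property>
>
> If the NameNode host dies, you can start a replacement NameNode pointing
> at the copy on the NFS mount.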
>
> There is some excellent work going on for HA within the Hadoop project
> itself; for details see
> https://issues.apache.org/jira/browse/HDFS-1623
>
> Regards
> Bejoy KS
>
>
> On Thu, Mar 22, 2012 at 7:56 PM, Tibor Korocz <tkor...@gmail.com> wrote:
>
>> Hi,
>>
>> I am new to Hadoop. I created a four-node cluster: 3 slaves and 1 master
>> (this is only the test system).
>> It works fine; I used this howto:
>> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-multi-node-cluster/.
>> My question is: what is the best solution to make the master (namenode)
>> fail over? I have read a lot, but I don't know what is best.
>> I found this howto:
>> http://www.cloudera.com/blog/2009/07/hadoop-ha-configuration/ , but if
>> it is possible, I do not want to use DRBD.
>> I hope somebody can help me. Sorry for my English... :)
>>
>> Thanks.
>> Tibi
>>
>
>