Thanks Mohammad :-),

I have just read about this concept of the Secondary NameNode. Thank you for your reply.
Mohammad, I am not finding a way to implement this. Would you please explain how
to recover the namenode? I am getting confused.

Thanks & Regards
Yogesh Kumar Dhari

________________________________________
From: Mohammad Tariq [donta...@gmail.com]
Sent: Friday, July 20, 2012 3:17 PM
To: hdfs-user@hadoop.apache.org
Subject: Re: NameNode fails

Hi Yogesh,

       First of all, we should always keep in mind that the Secondary
Namenode is not a backup for the Namenode. Its name gives the
sense that it is a backup for the Namenode, but in reality it is not.
The Namenode stores its metadata in 2 files :
1- fsimage - a snapshot of the filesystem at the time the Namenode started
2- Edit logs - the sequence of changes made to the filesystem
after the Namenode started.

The sole purpose of the Secondary Namenode is to maintain a checkpoint in HDFS,
and it acts as a helper to the Namenode. It basically :
1- fetches the edit logs from the Namenode at regular intervals and
applies them to its copy of the fsimage
2- once it has the new fsimage, it copies it back to the Namenode

The Namenode uses this fsimage at the next restart.
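
As a rough sketch of how a recovery could look (the property names below are
the ones shipped with the 0.20 defaults and the paths are only examples, so
please verify them against your own setup): the Secondary Namenode writes its
checkpoint into fs.checkpoint.dir, and if the Namenode's dfs.name.dir is ever
lost, you can start a Namenode with an empty dfs.name.dir and import that
checkpoint.

**********************************************************
<!-- core-site.xml : where the Secondary Namenode keeps its checkpoint -->
<property>
    <name>fs.checkpoint.dir</name>
    <value>/HADOOP/hadoop-0.20.2/hadoop_checkpoint_dirr</value>
</property>

<!-- how often (in seconds) a checkpoint is taken -->
<property>
    <name>fs.checkpoint.period</name>
    <value>3600</value>
</property>

# recovery: with dfs.name.dir empty, pull in the latest checkpoint
bin/hadoop namenode -importCheckpoint
**********************************************************

Keep in mind that the import only brings back the state of the last
checkpoint, so anything written after that checkpoint is still lost.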

Regards,
    Mohammad Tariq


On Fri, Jul 20, 2012 at 1:59 PM,  <yogesh.kuma...@wipro.com> wrote:
> Hi Bejoy,
>
> It's done now. The error log was showing that the namenode was not formatted,
> so I closed all previous terminals and restarted it after formatting.
>
> It's running now.
>
> Please suggest: in case it crashes, how do I recover it
> from the Secondary Namenode? How should I proceed with that?
>
>
> Thanks & regards
> Yogesh Kumar Dhari
> ________________________________
> From: Bejoy KS [bejoy.had...@gmail.com]
> Sent: Friday, July 20, 2012 12:56 PM
> To: Yogesh Kumar (WT01 - Communication and Media);
> hdfs-user@hadoop.apache.org
> Subject: Re: NameNode fails
>
> Hi Yogesh
>
> Please post the error logs/messages if you find any.
>
> Regards
> Bejoy KS
>
> Sent from handheld, please excuse typos.
> ________________________________
> From: <yogesh.kuma...@wipro.com>
> Date: Fri, 20 Jul 2012 07:21:24 +0000
> To: <hdfs-user@hadoop.apache.org>; <bejoy.had...@gmail.com>
> Subject: RE: NameNode fails
>
> Thanks Bejoy, Mohammad and Vignesh :-).
>
> I have done as you suggested and made these changes, then formatted the
> namenode and tried to start the cluster,
> but now the namenode is not starting :-(
>
>
> hdfs-site.xml
>
> **********************************************************
> <configuration>
>     <property>
>         <name>dfs.replication</name>
>         <value>1</value>
>     </property>
>
>     <property>
>         <name>dfs.name.dir</name>
>         <value>/HADOOP/hadoop-0.20.2/hadoop_name_dirr</value>
>     </property>
>
>     <property>
>         <name>dfs.data.dir</name>
>         <value>/HADOOP/hadoop-0.20.2/hadoop_data_dirr</value>
>     </property>
>
> </configuration>
>
> **************************************************************
>
>
>
> core-site.xml
>
> **************************************************************
>
> <configuration>
>     <property>
>         <name>fs.default.name</name>
>         <value>hdfs://localhost:9000</value>
>     </property>
>
>     <property>
>     <name>hadoop.tmp.dir</name>
>     <value>/HADOOP/hadoop-0.20.2/hadoop_temp_dirr</value>
>     <description>A base for other temporary directories.</description>
> </property>
>
> </configuration>
>
> *****************************************************************
>
> Please suggest.
>
> Thanks & Regards
> Yogesh Kumar Dhari
>
>
> ________________________________
> From: Mohammad Tariq [donta...@gmail.com]
> Sent: Friday, July 20, 2012 12:17 PM
> To: hdfs-user@hadoop.apache.org; bejoy.had...@gmail.com
> Subject: Re: NameNode fails
>
> Hi Yogesh,
>     Do as suggested by Bejoy and add the hadoop.tmp.dir property to your
> core-site.xml file. Its value defaults to the /tmp dir, which gets emptied at
> each restart, so all the data is lost... also add the dfs.name.dir and
> dfs.data.dir properties to your hdfs-site.xml file to avoid the data loss.
>
> On Friday, July 20, 2012, Bejoy KS <bejoy.had...@gmail.com> wrote:
>> Hi Yogesh
>>
>> Is your dfs.name.dir pointing to the /tmp dir? If so, try changing that to any
>> other dir. The contents of /tmp may get wiped out on OS restarts.
>> Regards
>> Bejoy KS
>>
>> Sent from handheld, please excuse typos.
>> ________________________________
>> From: <yogesh.kuma...@wipro.com>
>> Date: Fri, 20 Jul 2012 06:20:02 +0000
>> To: <hdfs-user@hadoop.apache.org>
>> ReplyTo: hdfs-user@hadoop.apache.org
>> Subject: NameNode fails
>> Hello All :-),
>>
>> I am new to HDFS.
>>
>> I have installed a single-node HDFS setup and started all the daemons; every
>> daemon starts and works fine.
>> But when I shut down my system or restart it and then try to run all the
>> daemons, the Namenode doesn't start.
>>
>> To start it I need to format the namenode, and all the data gets washed off :-(.
>>
>> Please help me and advise me on this: how can I recover the
>> namenode from the Secondary Namenode on a single-node setup?
>>
>>
>> Thanks & Regards
>> Yogesh Kumar Dhari
>>
>
> --
> Regards,
>     Mohammad Tariq
>
