If /tmp/hadoop-user/dfs/namesecondary doesn't exist now, then yes, you need to start over. Try keeping multiple copies of the metadata, in a location that's outside /tmp (use the dfs.name.dir config).
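For the archives, a minimal sketch of the hdfs-site.xml change Harsh is suggesting; the two paths below are illustrative assumptions, not taken from this thread:

```xml
<!-- hdfs-site.xml (Hadoop 1.x property name): write the namenode
     image and edit log to more than one directory, none under /tmp. -->
<property>
  <name>dfs.name.dir</name>
  <!-- comma-separated list; the namenode mirrors its metadata
       into every directory listed here, ideally on separate disks -->
  <value>/var/lib/hadoop/dfs/name,/mnt/backup/dfs/name</value>
</property>
```

With more than one directory listed, losing a single copy (as happened here) is survivable, since the namenode can be restarted from any surviving replica.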
On Wed, Jun 5, 2013 at 3:53 PM, Han JU <[email protected]> wrote:
> Thanks Ted.
> In fact it's an experimental cluster so no checkpointing... and snn is
> running in the same machine as nn, so I think its copy of metadata is also
> deleted by my colleague ...
>
> Does this mean that I have no choice but format/start over my HDFS?
>
>
> 2013/6/5 Ted Xu <[email protected]>
>>
>> Hi Han,
>>
>> HDFS metadata cannot be fully reconstructed by datanode report.
>>
>> If you have deployed a checkpoint node/secondary namenode, you can copy
>> the metadata to namenode and restart. This could recover most of the
>> metadata.
>>
>>
>> On Wed, Jun 5, 2013 at 5:30 PM, Han JU <[email protected]> wrote:
>>>
>>> Hi,
>>>
>>> The folder /tmp/hadoop-user/dfs/name is accidentally deleted by a
>>> colleague (along with all other things in this directory), is there any
>>> means to recover this directory?
>>> I know the directory where dfs is located in every datanode. Can I
>>> reconstruct the deleted files without formatting the hdfs?
>>>
>>> Thanks!
>>>
>>> --
>>> JU Han
>>>
>>> Software Engineer Intern @ KXEN Inc.
>>> UTC - Université de Technologie de Compiègne
>>> GI06 - Fouille de Données et Décisionnel
>>>
>>> +33 0619608888
>>
>>
>> --
>> Regards,
>> Ted Xu
>
>
> --
> JU Han
>
> Software Engineer Intern @ KXEN Inc.
> UTC - Université de Technologie de Compiègne
> GI06 - Fouille de Données et Décisionnel
>
> +33 0619608888

--
Harsh J
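For readers who land on this thread with the same problem but an intact secondary namenode checkpoint, Ted's recovery path can be driven with the Hadoop 1.x importCheckpoint option; a rough sketch, with the paths being assumptions for illustration:

```shell
# Stop the namenode first (e.g. stop-dfs.sh, or
# hadoop-daemon.sh stop namenode on the NN host).

# In hdfs-site.xml / core-site.xml, make sure fs.checkpoint.dir points
# at the surviving SNN checkpoint directory (in this thread that would
# have been /tmp/hadoop-user/dfs/namesecondary) and that the directory
# named by dfs.name.dir exists but is empty.

# Then let the namenode rebuild its metadata from the checkpoint:
hadoop namenode -importCheckpoint

# Restart HDFS afterwards; files written after the last checkpoint
# are still lost, which is why this recovers "most" of the metadata.
```

None of this helps once the SNN copy is gone too, which is the situation in this thread.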
