I believe the format must have wiped it all out. You might be lucky
with the SNN location if you were running one, though - could you check
in the SNN's directory for a valid fsimage?
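
For example, a minimal recovery sketch (untested; it assumes
fs.checkpoint.dir points at the SNN's checkpoint directory and that
dfs.name.dir is empty):

  ls ${fs.checkpoint.dir}/current
  # A usable checkpoint should contain fsimage, edits, fstime and VERSION.
  # If it looks sane, try starting the NameNode from it:
  bin/hadoop namenode -importCheckpoint

Note that -importCheckpoint will refuse to run if dfs.name.dir already
contains a legal image.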

On Thu, Apr 28, 2011 at 12:53 PM, Adarsh Sharma
<[email protected]> wrote:
> I correct my posting: this problem arises from running a script that
> internally issues the commands below as the root user:
>
>> bin/hadoop namenode -format
>> bin/start-all.sh
>
> Now, is it possible to start the previous cluster with the previous data
> or not?
>
> Thanks
>
>
> Adarsh Sharma wrote:
>>
>> Thanks Harsh,
>>
>> My dfs.name.dir is /home/hadoop/project/hadoop-0.20.2/name,
>> and there is only one file, fsimage, in the image directory; the current
>> directory is empty.
>>
>> But there should be four files (fsimage, edits, etc.) in the current
>> directory. How did they get deleted when I haven't even issued the format
>> command yet?
>>
>> I think there must be a way to recover the previous data.
>>
>> Harsh J wrote:
>>>
>>> Hello Adarsh,
>>>
>>> On Thu, Apr 28, 2011 at 11:02 AM, Adarsh Sharma
>>> <[email protected]> wrote:
>>>
>>>>
>>>> After correcting my mistake, when I try to run as the hadoop user, my
>>>> NameNode fails with the exception below:
>>>> 2011-04-28 10:53:49,608 ERROR
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
>>>> initialization failed.
>>>> java.io.IOException: NameNode is not formatted.
>>>>      at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:317)
>>>
>>> Do you still have the valid contents in your good ${dfs.name.dir}? You
>>> should be able to recover with that.
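>>> For example, a quick sanity check (substitute your configured
>>> dfs.name.dir; the expected file list assumes a 0.20-era layout):
>>>
>>>   ls ${dfs.name.dir}/current
>>>   # An intact image directory should contain fsimage, edits,
>>>   # fstime and VERSION.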
>>>
>>>
>>
>>
>
>



-- 
Harsh J
