You might have bumped into http://issues.apache.org/jira/browse/HADOOP-1443

From the JIRA issue, it looks like there isn't a patch available for it yet. :)

Thanks,
dhruba

-----Original Message-----
From: Dennis Kubes [mailto:[EMAIL PROTECTED] 
Sent: Thursday, May 31, 2007 11:58 PM
To: hadoop-user@lucene.apache.org
Subject: Re: Upgrade of DFS - Urgent

Okay.  My procedure was: I backed up the old current directory, did a 
namenode reformat, then copied the old current files into the 
reformatted current directory.  Then I started up the namenode (while 
praying very hard and sweating profusely).  Everything seems to have 
worked fine.  I am able to copy files to and from the DFS, and all 
block reports look good, as does the fsck output.

On another note, I am noticing a bug in getBlockLocations where a 
zero-length file will throw an ArrayIndexOutOfBoundsException.  I am 
still tracking this one down in the code and will submit a patch when I 
have found it.
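
For reference, here is a rough way to reproduce it.  This is an 
untested sketch: the getFileBlockLocations call follows the generic 
FileSystem interface and the exact method name may differ on this 
trunk, and /tmp/empty is just an example path.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Create a zero-length file, then ask for its block locations.
    public class ZeroLengthRepro {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            Path p = new Path("/tmp/empty");      // example path
            fs.create(p).close();                 // zero-length file
            FileStatus st = fs.getFileStatus(p);
            // Expected: an empty array; observed on affected builds:
            // ArrayIndexOutOfBoundsException
            fs.getFileBlockLocations(st, 0, st.getLen());
        }
    }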

Dennis Kubes

Raghu Angadi wrote:
> 
> This is the result of HADOOP-1242. I would prefer that it did not 
> require the presence of this image directory.
> 
> For now you could manually create an image/fsimage file in the name/ 
> directory. If you write 4 random bytes to fsimage, you have roughly a 
> 50% chance of success: the readInt() from the file just needs to 
> return a value less than -3. Only the first 4 bytes are important.
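> 
> For example, something like this (an untested sketch; the path is 
> Dennis's name directory from below, and any int less than -3 will do):
> 
>   import java.io.DataOutputStream;
>   import java.io.File;
>   import java.io.FileOutputStream;
> 
>   // Write a 4-byte int < -3 into image/fsimage so the namenode's
>   // readInt() check passes; -4 is an arbitrary valid choice.
>   public class WriteImageFile {
>       public static void main(String[] args) throws Exception {
>           File dir = new File("/d01/hadoop/dfs/name/image");
>           dir.mkdirs();  // create the image/ directory if missing
>           DataOutputStream out = new DataOutputStream(
>               new FileOutputStream(new File(dir, "fsimage")));
>           out.writeInt(-4);  // any value < -3 passes the check
>           out.close();
>       }
>   }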
> 
> Raghu.
> 
> Dennis Kubes wrote:
>> All,
>>
>> I upgraded to the most recent trunk of Hadoop and I started getting 
>> the error below, where /d01/hadoop/dfs/name is our namenode directory:
>>
>> org.apache.hadoop.dfs.InconsistentFSStateException: Directory 
>> /d01/hadoop/dfs/name is in an inconsistent state: 
>> /d01/hadoop/dfs/name/image does not exist.
>>
>> The old configuration was under a directory structure like:
>>
>> /d01/hadoop/dfs/name/current
>>
>> After backing up the namenode and playing around a little, I found 
>> that if I reformatted the namenode and then copied the old files 
>> from the current directory back into the "new" current directory, 
>> the namenode would start up.
>>
>> We have quite a bit of data on this cluster (around 8T), and I am a 
>> little nervous about starting up the entire cluster without some 
>> clarification.  If I start up the cluster now, will any old data 
>> blocks be deleted, or will those data blocks remain because I copied 
>> the old configuration files into the new "current"?
>>
>> Is there another way to upgrade this DFS cluster?  Any help is 
>> appreciated.
>>
>> Dennis Kubes
>>
> 
