This is the result of HADOOP-1242. I would prefer that it did not require
the presence of this image directory.
For now you could manually create an image/fsimage file in the name/
directory. If you write 4 random bytes to fsimage, you have roughly a 50%
chance of success: readInt() on the file should return a value less than
-3. Only the first 4 bytes matter.
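The manual workaround above can be sketched as follows. This is only a
sketch: the path is a placeholder for your actual namenode directory, and
-7 is just one example of a value less than -3 (per the readInt()
description above, any such value should do):

```shell
# Placeholder path; substitute your real namenode directory,
# e.g. /d01/hadoop/dfs/name (assumption, not from the thread)
NAME_DIR=/tmp/name

# Create the image/ directory the namenode is complaining about
mkdir -p "$NAME_DIR/image"

# 0xFFFFFFF9 is -7 as a signed 32-bit big-endian int, i.e. less
# than -3, so readInt() on the first 4 bytes passes the check
printf '\xff\xff\xff\xf9' > "$NAME_DIR/image/fsimage"
```

Writing a deterministic value instead of random bytes avoids the 50%
failure case entirely.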
Raghu.
Dennis Kubes wrote:
All,
I upgraded to the most recent trunk of Hadoop and I started getting the
error below, where /d01/hadoop/dfs/name is our namenode directory:
org.apache.hadoop.dfs.InconsistentFSStateException: Directory
/d01/hadoop/dfs/name is in an inconsistent state:
/d01/hadoop/dfs/name/image does not exist.
The old configuration was under a directory structure like:
/d01/hadoop/dfs/name/current
After backing up the namenode and playing around a little, I found that if
I reformatted the namenode and then copied the old files from the original
current directory back into the "new" current directory, the namenode
would start up.
We have quite a bit of data on this cluster (around 8 TB) and I am a
little nervous about starting up the entire cluster without some
clarification. If I start up the cluster now, will any old data blocks
be deleted, or will those data blocks remain because I copied the old
configuration files into the new "current" directory?
Is there another way to upgrade this DFS cluster? Any help is appreciated.
Dennis Kubes