On Jun 6, 2006, at 7:36 AM, Stefan Groschupf wrote:
Hi,
we discussed the three kinds of data loss: hardware, software, and
human error.
I would love to figure out what my problem was today. :)
Scenario:
+ updated from hadoop 0.2.1 to 0.4
+ had problems getting all datanodes started
+ downgraded to hadoop 0.3.1
+ got an error message about an incompatible dfs (I guess 0.4 had
already started to write to the log)
Actually, you must have downgraded to pre-HADOOP-124. Since I just did
that downgrade on one of my clusters, I figured out the format changes.
(This is only for those of you who are stuck and feel very comfortable
editing binary files in the editor of your choice. Please be careful
and keep backups.)
For downgrading dfs namenodes from 0.3.* or 0.4-dev to 0.3-dev (svn rev
<= 410634):
1. For <name-node-dir>/image/fsimage:
a. change byte 4 from fe to ff
b. delete bytes 5 to 8
2. For <name-node-dir>/edits:
a. change byte 4 from fe to ff
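If you'd rather script those edits than poke at the files in a hex
editor, here is a minimal Python sketch of the same changes. It assumes
the byte positions above are counted from 1 (so "byte 4" is offset 3),
and NAME_NODE_DIR is just a placeholder for your <name-node-dir>;
verify against your own files and keep the .bak copies around.

#!/usr/bin/env python
# Sketch only, not an official tool. Assumes 1-based byte positions,
# i.e. "byte 4" is offset 3 and "bytes 5 to 8" are offsets 4..7.
import shutil

NAME_NODE_DIR = "/path/to/name-node-dir"  # placeholder for <name-node-dir>

def patch(path, delete_bytes_5_to_8):
    shutil.copyfile(path, path + ".bak")   # back up before touching anything
    with open(path, "rb") as f:
        data = bytearray(f.read())
    # Sanity check: byte 4 should be fe in the 0.3.*/0.4-dev format.
    assert data[3] == 0xfe, "unexpected version byte in %s; aborting" % path
    data[3] = 0xff                         # change byte 4 from fe to ff
    if delete_bytes_5_to_8:
        del data[4:8]                      # delete bytes 5 to 8
    with open(path, "wb") as f:
        f.write(bytes(data))

patch(NAME_NODE_DIR + "/image/fsimage", delete_bytes_5_to_8=True)
patch(NAME_NODE_DIR + "/edits", delete_bytes_5_to_8=False)

Run it once with the namenode stopped; if the version-byte check fails,
the file isn't in the expected format and is left untouched.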
-- Owen