Hi,
we discussed the three kinds of data loss: hardware, software, or human errors.
I would love to figure out what my problem was today. :)

Scenario:
+ updated from Hadoop 0.2.1 to 0.4
+ problems getting all datanodes started
+ downgraded to Hadoop 0.3.1
+ error message about an incompatible dfs (I guess 0.4 had already started to write to the log)

All transactions done with 0.2 during the last hours were lost. That means the data I had created and copied was not in the dfs any more. I guess the update/downgrade process destroyed the transaction log, but the image itself was still ok.

I ended up with a completely corrupted dfs - I guess because of the lost dfs namenode transaction log.
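
If my guess is right, the mechanism is roughly this (a made-up sketch with invented names, not the real namenode code): the namespace is rebuilt from the last image plus a replay of the transaction log, so once the log is gone, everything done since the image was written is gone with it.

    import java.util.ArrayList;
    import java.util.List;

    class NamespaceSketch {
        // state restored from the last checkpoint image
        List<String> files = new ArrayList<String>();

        // everything done since that checkpoint exists only in the transaction log;
        // if the log is destroyed, these operations can never be replayed
        void replayEditLog(List<String> editLog) {
            for (String op : editLog) {
                if (op.startsWith("CREATE ")) {
                    files.add(op.substring("CREATE ".length()));
                } else if (op.startsWith("DELETE ")) {
                    files.remove(op.substring("DELETE ".length()));
                }
            }
        }
    }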

Wouldn't it be better, in case we discover a version conflict in the dfs, for the user to have to manually confirm that the data should be converted to the new format?
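
Roughly what I have in mind (class and method names invented just for illustration, not the actual Hadoop code):

    class StorageVersionCheck {
        // refuse to touch data written by a different version unless the operator
        // explicitly confirmed the conversion
        static void checkLayoutVersion(int onDiskVersion, int softwareVersion,
                                       boolean conversionConfirmed) {
            if (onDiskVersion == softwareVersion) {
                return; // nothing to convert
            }
            if (!conversionConfirmed) {
                throw new IllegalStateException("dfs was written by version "
                    + onDiskVersion + " but this is version " + softwareVersion
                    + "; refusing to convert without explicit confirmation");
            }
            // only at this point would the data be converted to the new format
        }
    }

That way an accidental start with the wrong version would fail loudly instead of silently rewriting the log.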
Any thoughts?

Thanks.
Stefan
