Hi Konstantin,
Could you give some more information about what happened to you?
- what is your cluster size
9 datanode, 1 namenode.
- amount of data
Total raw bytes: 6023680622592 (5609.98 Gb)
Used raw bytes: 2357053984804 (2195.17 Gb)

- how long had dfs run without restarting the namenode before upgrading?
I would say 2 weeks.


we discussed the three kinds of data loss: hardware, software, or human errors.
I would love to figure out what my problem was today. :)

Looks like you are not alone :-(
Too bad that the others didn't report it earlier. :)



Scenario:
+ updated from hadoop .2.1 to .4.
+ problems getting all datanodes started

what was the problem with datanodes?
I don't think there was a real problem. I noticed that the datanodes were not able to connect to the namenode. Later on I just added a "sleep 5" to the dfs start script after starting the namenode, and that solved the problem. However, at the time of the upgrade I noticed that problem, thought "ok, not working yet, let's wait another week", and downgraded.
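For illustration, the workaround above amounts to pausing between starting the namenode and starting the datanodes. This is only a sketch: the daemon invocations are stubbed out with echo functions standing in for the real Hadoop start commands, whose exact names are an assumption here.

```shell
#!/bin/sh
# Sketch of the "sleep 5" workaround: start the namenode, wait,
# then start the datanodes so they can reach a namenode that is
# already listening. The two functions below are stand-ins for the
# actual Hadoop daemon start commands (names assumed, not confirmed).
start_namenode() { echo "namenode started"; }
start_datanodes() { echo "datanodes started"; }

start_namenode
sleep 5   # give the namenode time to come up before datanodes try to register
start_datanodes
```

A fixed sleep is a crude fix; it just happens to be long enough on this cluster for the namenode to begin accepting connections.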


+ downgraded to hadoop .3.1
+ error message about an incompatible dfs (I guess . had already started writing to the log)

What is the message?

Sorry, I can't find the exception in the logs anymore. :-(
Something like "version conflict -1 vs -2" :-o Sorry, I don't remember exactly.

Thanks,

Thank you!

Stefan
