Hi, I did this upgrade on a similar cluster some weeks ago. I used the following method (all commands run as the owner of the Hadoop daemon processes):
* Stop the cluster.
* Start only HDFS with: start-dfs.sh -upgrade
* At this point the migration has started.
* You can check the status with: hadoop dfsadmin -upgradeProgress status
* Files are now accessible for reading, so you can verify the data.
* If you find any issue, you can roll back the migration with: start-dfs.sh -rollback
* If everything looks OK, mark the upgrade as finalized: hadoop dfsadmin -finalizeUpgrade

-----Original Message-----
From: Eli Finkelshteyn [mailto:iefin...@gmail.com]
Sent: Tuesday, May 29, 2012 20:29
To: common-user@hadoop.apache.org
Subject: Best Practices for Upgrading Hadoop Version?

Hi,
I'd like to upgrade my Hadoop cluster from version 0.20.2-CDH3B4 to 1.0.3. I'm running a pretty small cluster of just 4 nodes, and it's not really being used by many people at the moment, so I'm OK if things get dirty or it goes offline for a bit. I was looking at the tutorial at wiki.apache.org <http://wiki.apache.org/hadoop/Hadoop_Upgrade>, but it seems either outdated or missing information. Namely, from what I've noticed so far, it doesn't specify what user any of the commands should be run as. Since I'm sure this is something a lot of people have needed to do, is there a better tutorial somewhere for upgrading Hadoop versions in general?

Eli
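For reference, the steps above can be sketched as one shell session (a sketch only; it assumes the Hadoop scripts are on your PATH and that you are logged in as the user that owns the daemon processes, and that you pause at each step to check the result rather than running it blindly end to end):

```shell
# Run as the Hadoop daemon owner on the NameNode host.

stop-all.sh                               # 1. stop the whole cluster

start-dfs.sh -upgrade                     # 2. start only HDFS; the migration begins now

hadoop dfsadmin -upgradeProgress status   # 3. repeat until it reports the upgrade is complete
                                          #    (files are readable at this point, so verify your data)

# 4a. If anything looks wrong, roll back to the pre-upgrade state:
#     stop-dfs.sh
#     start-dfs.sh -rollback

# 4b. If everything looks OK, make the upgrade permanent
#     (rollback is no longer possible after this):
hadoop dfsadmin -finalizeUpgrade
```

Note that finalizing deletes the previous version's saved state, so only run the last command once you are satisfied the upgraded filesystem is healthy.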