Dear Wiki user, You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change notification.
The following page has been changed by CameronPope:
http://wiki.apache.org/hadoop/Hadoop_Upgrade

The comment on the change is:
After a filesystem upgrade, namenode will not start without -upgrade flag

------------------------------------------------------------------------------
  10. Install the new version of the Hadoop software. See GettingStartedWithHadoop and HowToConfigure for details.
  11. Optionally, update the {{{conf/slaves}}} file before starting, so that it reflects the current set of active nodes.
  12. Optionally, change the name node's and the job tracker's port numbers in the configuration, so that nodes still running the old version cannot connect and disrupt system operation (a configuration sketch follows this list). [[BR]] {{{fs.default.name}}} [[BR]] {{{mapred.job.tracker}}}
- 13. Optionally, start name node only. [[BR]] {{{bin/hadoop-daemon.sh start namenode}}} [[BR]] This should convert the checkpoint to the new version format.
+ 13. Optionally, start name node only. [[BR]] {{{bin/hadoop-daemon.sh start namenode -upgrade}}} [[BR]] This should convert the checkpoint to the new version format.
  14. Optionally, run the {{{lsr}}} command: [[BR]] {{{bin/hadoop dfs -lsr / > dfs-v-new-lsr-0.log}}} [[BR]] and compare with {{{dfs-v-old-lsr-1.log}}} (a comparison sketch appears below the list).
  15. Start the DFS cluster. [[BR]] {{{bin/start-dfs.sh}}}
  16. Run the report command: [[BR]] {{{bin/hadoop dfsadmin -report > dfs-v-new-report-1.log}}} [[BR]] and compare with {{{dfs-v-old-report-1.log}}} to ensure that all data nodes previously belonging to the cluster are up and running.
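Step 12's port change is made in the Hadoop configuration file, {{{conf/hadoop-site.xml}}} in Hadoop releases of this era. A minimal sketch of checking the values the upgraded daemons will use; the host names and port numbers mentioned below are hypothetical placeholders, not values from this page:

{{{
# Hypothetical example: after the edit, conf/hadoop-site.xml carries new addresses
# that nodes still configured for the old version do not know about, e.g.
#   fs.default.name    -> namenode.example.com:9100
#   mapred.job.tracker -> jobtracker.example.com:9101
# Confirm what the configuration actually contains before starting the daemons:
grep -A 1 -e 'fs.default.name' -e 'mapred.job.tracker' conf/hadoop-site.xml
}}}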
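Steps 14 and 16 ask for a comparison between the pre-upgrade and post-upgrade listings and reports. A minimal sketch of one way to run that comparison, assuming the log files were written under the names used in the steps above:

{{{
# The namespace listing taken after the upgrade should show the same files and
# directories as the one taken before it; investigate any path that has disappeared.
diff dfs-v-old-lsr-1.log dfs-v-new-lsr-0.log

# Every data node reported before the upgrade should appear again and be in
# service in the new report.
diff dfs-v-old-report-1.log dfs-v-new-report-1.log
}}}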
