You have to upgrade both the NameNode and the DataNodes. It's better to issue start-dfs.sh -upgrade, which starts the whole DFS with the upgrade option.
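As a minimal sketch of that advice (assuming $HADOOP_HOME points at the new 2.4.1 install and all services are already stopped; the /opt path default is only illustrative):

```shell
# Sketch only: prints the commands to run, so you can review them first.
# Assumption: $HADOOP_HOME points at the already-installed 2.4.1 binaries.
HADOOP_HOME="${HADOOP_HOME:-/opt/hadoop-2.4.1}"

# start-dfs.sh -upgrade starts the NameNode with -upgrade AND brings up
# the DataNodes in the same pass, so both sides write a "previous"
# directory that makes a later rollback possible.
UPGRADE_CMD="$HADOOP_HOME/sbin/start-dfs.sh -upgrade"
echo "$UPGRADE_CMD"

# Only after the upgraded cluster has been verified, make it permanent
# (this deletes the "previous" directories and forfeits rollback):
FINALIZE_CMD="$HADOOP_HOME/bin/hdfs dfsadmin -finalizeUpgrade"
echo "$FINALIZE_CMD"
```

Starting the DataNodes separately with a plain `hadoop-daemon.sh start datanode` (as in the steps below) also works for the upgrade itself, but the single start-dfs.sh invocation keeps the two sides in step.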
Check whether "current" and "previous" directories are present under both dfs.namenode.name.dir and dfs.datanode.data.dir.

On 9/18/14, sam liu <[email protected]> wrote:
> Hi Expert,
>
> Below are my steps. Is this a Hadoop bug, or did I miss anything? Thanks!
>
> Steps:
>
> [A] Upgrade
> 1. Install Hadoop 2.2.0 cluster
> 2. Stop Hadoop services
> 3. Replace 2.2.0 binaries with 2.4.1 binaries
> 4. Start datanodes: $HADOOP_HOME/sbin/hadoop-daemon.sh start datanode
> 5. Start namenode with option upgrade: $HADOOP_HOME/sbin/hadoop-daemon.sh start namenode -upgrade
> 6. Start secondary namenode, tasktracker and jobtracker
>
> Result: the whole upgrade process completed successfully.
>
> [B] Rollback
> 1. Stop all Hadoop services
> 2. Replace 2.4.1 binaries with 2.2.0 binaries
> 3. Start datanodes: $HADOOP_HOME/sbin/hadoop-daemon.sh start datanode
> 4. Start namenode with option rollback: $HADOOP_HOME/sbin/hadoop-daemon.sh start namenode -rollback
>
> Result: the namenode service started, but the datanodes failed with the following exception:
>
> 2014-09-17 11:04:51,416 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /hadoop/hdfs/data/in_use.lock acquired by nodename 817443@shihc071-public
> 2014-09-17 11:04:51,418 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for block pool Block pool BP-977402492-9.181.64.185-1410497086460 (storage id ) service to hostname/ip:9000
> org.apache.hadoop.hdfs.server.common.IncorrectVersionException: Unexpected version of storage directory /hadoop/hdfs/data. Reported: -55. Expecting = -47.
>         at org.apache.hadoop.hdfs.server.common.Storage.setLayoutVersion(Storage.java:1082)
>         at org.apache.hadoop.hdfs.server.datanode.DataStorage.setFieldsFromProperties(DataStorage.java:302)
>         at org.apache.hadoop.hdfs.server.common.Storage.readProperties(Storage.java:921)
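The rollback advice above can be sketched as follows (assumptions: $HADOOP_HOME points at the restored 2.2.0 binaries, the upgrade was never finalized, and NAME_DIR/DATA_DIR match your dfs.namenode.name.dir and dfs.datanode.data.dir; the name-dir path here is only a guess, adjust both to your site):

```shell
# Sketch only: checks rollback preconditions and prints the command to run.
HADOOP_HOME="${HADOOP_HOME:-/opt/hadoop-2.2.0}"
NAME_DIR="${NAME_DIR:-/hadoop/hdfs/name}"   # assumed dfs.namenode.name.dir
DATA_DIR="${DATA_DIR:-/hadoop/hdfs/data}"   # dfs.datanode.data.dir from the log

# A rollback is only possible while a "previous" directory still exists
# alongside "current" in every storage directory.
for d in "$NAME_DIR" "$DATA_DIR"; do
    for sub in current previous; do
        [ -d "$d/$sub" ] || echo "missing: $d/$sub (rollback not possible here)"
    done
done

# start-dfs.sh -rollback rolls back the NameNode and the DataNodes
# together. Starting the DataNodes with a plain "start datanode", as in
# step [B]3 above, leaves them on the 2.4.1 layout version (-55), which
# is exactly the IncorrectVersionException the 2.2.0 binaries report.
ROLLBACK_CMD="$HADOOP_HOME/sbin/start-dfs.sh -rollback"
echo "$ROLLBACK_CMD"
```

In other words, the failure above is not a bug: only the NameNode was given -rollback, so the DataNode storage was never rolled back to layout version -47.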
