I also use another solution for the namespace incompatibility, which is to run:

  rm -Rf /tmp/hadoop-<username>/*

  then format the NameNode. Hope that helps,
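For what it's worth, a guarded sketch of those two steps. `HADOOP_TMP` is just an illustrative variable here; the real path comes from `hadoop.tmp.dir` in core-site.xml, which defaults to /tmp/hadoop-<username>:

```shell
# Assumed: HADOOP_TMP mirrors hadoop.tmp.dir (default /tmp/hadoop-<user>);
# adjust it to whatever your core-site.xml actually says.
HADOOP_TMP="${HADOOP_TMP:-/tmp/hadoop-$(whoami)}"
if [ -d "$HADOOP_TMP" ]; then
  # ':?' aborts if the variable is somehow empty, so the glob can never
  # expand to '/*'. Note there is no space before the '*' -- with a space,
  # 'rm -rf dir/ *' would also delete everything in the current directory.
  rm -rf "${HADOOP_TMP:?}"/*
fi
# Then reformat and restart -- this erases all HDFS metadata:
# bin/hadoop namenode -format
# bin/start-all.sh
```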


Maha

On Jan 9, 2011, at 9:08 PM, Adarsh Sharma wrote:

> Shuja Rehman wrote:
>> Hi,
>> 
>> I have formatted the NameNode, and now when I restart the cluster I am
>> getting the strange error below. Kindly let me know how to fix it.
>> Thanks.
>> 
>> /************************************************************
>> STARTUP_MSG: Starting DataNode
>> STARTUP_MSG:   host = hadoop.zoniversal.com/10.0.3.85
>> STARTUP_MSG:   args = []
>> STARTUP_MSG:   version = 0.20.2+737
>> STARTUP_MSG:   build =  -r 98c55c28258aa6f42250569bd7fa431ac657bdbd;
>> compiled by 'root' on Mon Oct 11 13:14:05 EDT 2010
>> ************************************************************/
>> 2011-01-08 12:55:58,586 INFO org.apache.hadoop.ipc.Client: Retrying connect
>> to server: /10.0.3.85:8020. Already tried 0 time(s).
>> 2011-01-08 12:55:59,598 INFO org.apache.hadoop.ipc.Client: Retrying connect
>> to server: /10.0.3.85:8020. Already tried 1 time(s).
>> 2011-01-08 12:56:00,608 INFO org.apache.hadoop.ipc.Client: Retrying connect
>> to server: /10.0.3.85:8020. Already tried 2 time(s).
>> 2011-01-08 12:56:01,618 INFO org.apache.hadoop.ipc.Client: Retrying connect
>> to server: /10.0.3.85:8020. Already tried 3 time(s).
>> 2011-01-08 12:56:03,540 ERROR
>> org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException:
>> Datanode state: LV = -19 CTime = 1294051643891 is newer than the namespace
>> state: LV = -19 CTime = 0
>>        at
>> org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:249)
>>        at
>> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:148)
>>        at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:356)
>>        at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:272)
>>        at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1492)
>>        at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1432)
>>        at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1450)
>>        at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1575)
>>        at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1585)
>> 
>> 2011-01-08 12:56:03,541 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
>> /************************************************************
>> SHUTDOWN_MSG: Shutting down DataNode at hadoop.zoniversal.com/10.0.3.85
>> ************************************************************/
>> 2011-01-08 13:04:17,579 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
>> /************************************************************
>> STARTUP_MSG: Starting DataNode
>> STARTUP_MSG:   host = hadoop.zoniversal.com/10.0.3.85
>> STARTUP_MSG:   args = []
>> STARTUP_MSG:   version = 0.20.2+737
>> STARTUP_MSG:   build =  -r 98c55c28258aa6f42250569bd7fa431ac657bdbd;
>> compiled by 'root' on Mon Oct 11 13:14:05 EDT 2010
>> ************************************************************/
>> 2011-01-08 13:04:19,028 INFO org.apache.hadoop.ipc.Client: Retrying connect
>> to server: /10.0.3.85:8020. Already tried 0 time(s).
>> 2011-01-08 13:04:20,038 INFO org.apache.hadoop.ipc.Client: Retrying connect
>> to server: /10.0.3.85:8020. Already tried 1 time(s).
>> 2011-01-08 13:04:21,049 INFO org.apache.hadoop.ipc.Client: Retrying connect
>> to server: /10.0.3.85:8020. Already tried 2 time(s).
>> 2011-01-08 13:04:22,060 INFO org.apache.hadoop.ipc.Client: Retrying connect
>> to server: /10.0.3.85:8020. Already tried 3 time(s).
>> 2011-01-08 13:04:24,601 ERROR
>> org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException:
>> Incompatible namespaceIDs in /var/lib/hadoop-0.20/cache/hdfs/dfs/data:
>> namenode namespaceID = 125812142; datanode namespaceID = 1083940884
>>        at
>> org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:233)
>>        at
>> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:148)
>>        at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:356)
>>        at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:272)
>>        at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1492)
>>        at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1432)
>>        at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1450)
>>        at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1575)
>>        at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1585)
>> 
>> 2011-01-08 13:04:24,602 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
>> /************************************************************
>> SHUTDOWN_MSG: Shutting down DataNode at hadoop.zoniversal.com/10.0.3.85
>> 
>> 
>>  
> Manually delete
> 
> /var/lib/hadoop-0.20/cache/hdfs/dfs/data
> 
> directory on all the nodes, and then format the NameNode and start the cluster.
> 
> This error occurs due to an incompatibility in the metadata: after a reformat, the namespaceID recorded by the NameNode no longer matches the one stored in each DataNode's data directory.
> 
> 
> Best Regards
> 
> Adarsh Sharma
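Adarsh's fix works, and there is also a commonly used, gentler alternative for this exact "Incompatible namespaceIDs" error: copy the NameNode's namespaceID into the DataNode's VERSION file instead of deleting the whole data directory, which keeps the existing blocks. The paths below are assumptions taken from the log and the usual CDH3 defaults; check dfs.name.dir and dfs.data.dir in hdfs-site.xml on your cluster:

```shell
# Assumed default paths (dfs.name.dir / dfs.data.dir may differ):
NAME_DIR="${NAME_DIR:-/var/lib/hadoop-0.20/cache/hadoop/dfs/name}"
DATA_DIR="${DATA_DIR:-/var/lib/hadoop-0.20/cache/hdfs/dfs/data}"
if [ -f "$NAME_DIR/current/VERSION" ] && [ -f "$DATA_DIR/current/VERSION" ]; then
  # Read the NameNode's namespaceID and stamp it into the DataNode's
  # VERSION file so the two sides agree again.
  NN_ID=$(sed -n 's/^namespaceID=//p' "$NAME_DIR/current/VERSION")
  sed -i "s/^namespaceID=.*/namespaceID=$NN_ID/" "$DATA_DIR/current/VERSION"
fi
# Restart the DataNode afterwards; repeat on every affected node.
```

Editing VERSION by hand is a workaround, not a supported operation, so prefer Adarsh's delete-and-reformat approach if the data is disposable anyway.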
