Michael Noll's tutorial page has tips for exactly the error you are
facing:

http://www.michael-noll.com/wiki/Running_Hadoop_On_Ubuntu_Linux_(Multi-Node_Cluster)#java.io.IOException:_Incompatible_namespaceIDs
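
In short, that section describes two workarounds: (a) delete the
datanode's data directory and reformat the namenode (quick, but wipes
all HDFS data), or (b) edit the namespaceID in the datanode's
current/VERSION file so it matches the namenode's. Here is a rough
sketch of (b), using the path and IDs from your log — but run against a
throwaway copy of VERSION first, so you can see the effect before
touching the real file. Adjust paths and the ID for your setup.

```shell
# Demonstrate the namespaceID edit on a scratch copy of VERSION.
# The real file in your setup would be:
#   /home/hadoop/hadoop-datastore/hadoop-hadoop/dfs/data/current/VERSION
mkdir -p /tmp/dfs-demo/current
cat > /tmp/dfs-demo/current/VERSION <<'EOF'
namespaceID=1109869136
storageType=DATA_NODE
layoutVersion=-18
EOF

# Stop the cluster first (bin/stop-all.sh), then set the datanode's
# namespaceID to the namenode's value (2082816383 in your error message):
sed -i 's/^namespaceID=.*/namespaceID=2082816383/' /tmp/dfs-demo/current/VERSION

grep '^namespaceID=' /tmp/dfs-demo/current/VERSION
# prints: namespaceID=2082816383
```

After editing the real file, restart with bin/start-all.sh and check the
datanode log again.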

Abhishek

On Wed, Feb 10, 2010 at 12:57 PM, Nick Klosterman
<[email protected]> wrote:
> It appears I have incompatible namespaceIDs. Any thoughts on how to resolve
> that?
> This is what the full datanodes log is saying:
>
> 2010-02-10 15:25:09,125 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting DataNode
> STARTUP_MSG:   host = potr134pc26/127.0.0.1
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 0.20.1
> STARTUP_MSG:   build =
> http://svn.apache.org/repos/asf/hadoop/common/tags/release-0.20.1-rc1 -r
> 810220; compiled by 'oom' on Tue Sep  1 20:55:56 UTC 2009
> ************************************************************/
> 2010-02-10 15:25:13,785 ERROR
> org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException:
> Incompatible namespaceIDs in
> /home/hadoop/hadoop-datastore/hadoop-hadoop/dfs/data: namenode namespaceID =
> 2082816383; datanode namespaceID = 1109869136
>        at
> org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:233)
>        at
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:148)
>        at
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:298)
>        at
> org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:216)
>        at
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1283)
>        at
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1238)
>        at
> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1246)
>        at
> org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1368)
>
> 2010-02-10 15:25:13,786 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down DataNode at potr134pc26/127.0.0.1
> ************************************************************/
>
>
> On Wed, 10 Feb 2010, Nick Klosterman wrote:
>
>> I've been following Michael Noll's single-node cluster tutorial but am
>> unable to run the wordcount example successfully.
>>
>> It appears that I'm having some sort of problem involving the nodes. Using
>> copyFromLocal fails to replicate the data across 1 node.
>> When I try to look at the hadoop web interfaces I see that there aren't
>> any nodes (alive or dead) listed.
>>
>> After I start things this is what I get from dfsadmin -report
>> /usr/local/hadoop/bin$ ./hadoop dfsadmin -report
>> Configured Capacity: 0 (0 KB)
>> Present Capacity: 0 (0 KB)
>> DFS Remaining: 0 (0 KB)
>> DFS Used: 0 (0 KB)
>> DFS Used%: %
>> Under replicated blocks: 0
>> Blocks with corrupt replicas: 0
>> Missing blocks: 0
>>
>> -------------------------------------------------
>> Datanodes available: 0 (0 total, 0 dead)
>>
>>
>> Here are the commands I'm entering and the output of them:
>>
>> /usr/local/hadoop/bin$ ./start-all.sh
>> starting namenode, logging to
>> /usr/local/hadoop/bin/../logs/hadoop-hadoop-namenode-potr134pc26.out
>> localhost: starting datanode, logging to
>> /usr/local/hadoop/bin/../logs/hadoop-hadoop-datanode-potr134pc26.out
>> localhost: starting secondarynamenode, logging to
>> /usr/local/hadoop/bin/../logs/hadoop-hadoop-secondarynamenode-potr134pc26.out
>> starting jobtracker, logging to
>> /usr/local/hadoop/bin/../logs/hadoop-hadoop-jobtracker-potr134pc26.out
>> localhost: starting tasktracker, logging to
>> /usr/local/hadoop/bin/../logs/hadoop-hadoop-tasktracker-potr134pc26.out
>>
>> /usr/local/hadoop/bin$ jps
>> 24440 SecondaryNameNode
>> 24626 TaskTracker
>> 24527 JobTracker
>> 24218 NameNode
>> 24725 Jps
>>
>> ---> I had already created the txtinput directory with ./hadoop dfs
>> -mkdir txtinput
>>
>> /usr/local/hadoop/bin$ ./hadoop dfs -copyFromLocal
>> /home/hadoop/Desktop/*.txt txtinput
>> 10/02/10 15:29:38 WARN hdfs.DFSClient: DataStreamer Exception:
>> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
>> /user/hadoop/txtinput/20417.txt could only be replicated to 0 nodes, instead
>> of 1
>>        at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1267)
>>        at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>>        at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
>>        at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>        at java.lang.reflect.Method.invoke(Method.java:597)
>>        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>>        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>>        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>>        at java.security.AccessController.doPrivileged(Native Method)
>>        at javax.security.auth.Subject.doAs(Subject.java:396)
>>        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>>
>>        at org.apache.hadoop.ipc.Client.call(Client.java:739)
>>        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>>        at $Proxy0.addBlock(Unknown Source)
>>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>        at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>        at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>        at java.lang.reflect.Method.invoke(Method.java:597)
>>        at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>>        at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>>        at $Proxy0.addBlock(Unknown Source)
>>        at
>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2904)
>>        at
>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2786)
>>        at
>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2076)
>>        at
>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2262)
>>
>> 10/02/10 15:29:38 WARN hdfs.DFSClient: Error Recovery for block null bad
>> datanode[0] nodes == null
>> 10/02/10 15:29:38 WARN hdfs.DFSClient: Could not get block locations.
>> Source file "/user/hadoop/txtinput/20417.txt" - Aborting...
>> 10/02/10 15:29:38 WARN hdfs.DFSClient: DataStreamer Exception:
>> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
>> /user/hadoop/txtinput/7ldvc10.txt could only be replicated to 0 nodes,
>> instead of 1
>>        at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1267)
>>        at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>>        at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
>>        at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>        at java.lang.reflect.Method.invoke(Method.java:597)
>>        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>>        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>>        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>>        at java.security.AccessController.doPrivileged(Native Method)
>>        at javax.security.auth.Subject.doAs(Subject.java:396)
>>        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>>
>>        at org.apache.hadoop.ipc.Client.call(Client.java:739)
>>        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>>        at $Proxy0.addBlock(Unknown Source)
>>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>        at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>        at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>        at java.lang.reflect.Method.invoke(Method.java:597)
>>        at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>>        at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>>        at $Proxy0.addBlock(Unknown Source)
>>        at
>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2904)
>>        at
>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2786)
>>        at
>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2076)
>>        at
>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2262)
>>
>> 10/02/10 15:29:38 WARN hdfs.DFSClient: Error Recovery for block null bad
>> datanode[0] nodes == null
>> 10/02/10 15:29:38 WARN hdfs.DFSClient: Could not get block locations.
>> Source file "/user/hadoop/txtinput/7ldvc10.txt" - Aborting...
>> copyFromLocal: java.io.IOException: File /user/hadoop/txtinput/20417.txt
>> could only be replicated to 0 nodes, instead of 1
>>        at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1267)
>>        at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>>        at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
>>        at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>        at java.lang.reflect.Method.invoke(Method.java:597)
>>        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>>        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>>        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>>        at java.security.AccessController.doPrivileged(Native Method)
>>        at javax.security.auth.Subject.doAs(Subject.java:396)
>>        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>>
>> java.io.IOException: File /user/hadoop/txtinput/7ldvc10.txt could only be
>> replicated to 0 nodes, instead of 1
>>        at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1267)
>>        at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>>        at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
>>        at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>        at java.lang.reflect.Method.invoke(Method.java:597)
>>        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>>        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>>        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>>        at java.security.AccessController.doPrivileged(Native Method)
>>        at javax.security.auth.Subject.doAs(Subject.java:396)
>>        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>>
>> /usr/local/hadoop/bin$ ./hadoop jar ../hadoop-0.20.1-examples.jar
>> wordcount txtinput txtoutput
>>
>> The last command just ends up sitting there doing nothing with no output.
>> Any help getting the nodes up and running would be appreciated.
>>
>> Thanks,
>> Nick
>>
>
