Thanks!!! That worked! I guess I can edit the number on the datanodes as well, but if there is an even more "official" way to resolve this, I would be interested in hearing about it.
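For anyone finding this thread later: "editing the number" means rewriting the namespaceID line in each datanode's VERSION file (under dfs.data.dir, e.g. /tmp/hadoop-root/dfs/data/current/VERSION in the log quoted below). A minimal sketch, operating on a throwaway copy of the file rather than a live data directory, using the two IDs from the error message below:

```shell
# Hedged sketch: align a datanode's namespaceID with the namenode's.
# We build a throwaway copy of a VERSION file; on a real node you would
# stop the datanode first and edit dfs.data.dir/current/VERSION instead.
DATA_DIR=$(mktemp -d)
mkdir -p "${DATA_DIR}/current"

# A datanode VERSION file looks roughly like this (IDs from the log below):
cat > "${DATA_DIR}/current/VERSION" <<'EOF'
namespaceID=1687029285
storageType=DATA_NODE
layoutVersion=-18
EOF

# Replace the stale datanode ID with the namenode's ID from the error:
sed -i 's/^namespaceID=.*/namespaceID=1016244663/' "${DATA_DIR}/current/VERSION"

grep '^namespaceID=' "${DATA_DIR}/current/VERSION"   # namespaceID=1016244663
```

After the edit, restarting the datanode should let it register with the namenode again, since the two IDs now match.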

--- On Tue, 11/10/09, Edmund Kohlwey <[email protected]> wrote:

> From: Edmund Kohlwey <[email protected]>
> Subject: Re: Error with replication and namespaceID
> To: [email protected]
> Date: Tuesday, November 10, 2009, 1:46 PM
> Hi Ray,
> You'll probably find that even though the name node starts, it doesn't
> have any data nodes and is completely empty.
>
> Whenever Hadoop creates a new filesystem, it assigns a large random
> number to it to prevent you from mixing datanodes from different
> filesystems by accident. When you reformat the name node, its FS has
> one ID, but your data nodes still have chunks of the old FS with a
> different ID and so will refuse to connect to the namenode. You need
> to make sure these are cleaned up before reformatting. You can do it
> just by deleting the data node directory, although there's probably a
> more "official" way to do it.
> 
> 
> On 11/10/09 11:01 AM, Raymond Jennings III wrote:
> > On the actual datanodes I see the following exception. I am not sure
> > what the namespaceID is or how to sync them. Thanks for any advice!
> >
> > /************************************************************
> > STARTUP_MSG: Starting DataNode
> > STARTUP_MSG:   host = pingo-3.poly.edu/128.238.55.33
> > STARTUP_MSG:   args = []
> > STARTUP_MSG:   version = 0.20.1
> > STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/common/tags/release-0.20.1-rc1 -r 810220; compiled by 'oom' on Tue Sep  1 20:55:56 UTC 2009
> > ************************************************************/
> > 2009-11-09 09:57:45,328 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /tmp/hadoop-root/dfs/data: namenode namespaceID = 1016244663; datanode namespaceID = 1687029285
> >         at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:233)
> >         at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:148)
> >         at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:298)
> >         at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:216)
> >         at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1283)
> >         at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1238)
> >         at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1246)
> >         at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1368)
> >
> > --- On Mon, 11/9/09, Boris Shkolnik <[email protected]> wrote:
> >
> >> From: Boris Shkolnik <[email protected]>
> >> Subject: Re: newbie question - error with replication
> >> To: [email protected]
> >> Date: Monday, November 9, 2009, 5:02 PM
> >> Make sure you have at least one datanode running.
> >> Look at the data node log file. (logs/*-datanode-*.log)
> >>
> >> Boris.
> >>
> >> On 11/9/09 7:15 AM, "Raymond Jennings III" <[email protected]> wrote:
> >>
> >>> I am trying to resolve an IOException error. I have a basic setup
> >>> and shortly after running start-dfs.sh I get a:
> >>>
> >>> error: java.io.IOException: File
> >>> /tmp/hadoop-root/mapred/system/jobtracker.info could only be
> >>> replicated to 0 nodes, instead of 1
> >>> java.io.IOException: File
> >>> /tmp/hadoop-root/mapred/system/jobtracker.info could only be
> >>> replicated to 0 nodes, instead of 1
> >>>
> >>> Any pointers how to resolve this? Thanks!
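For completeness, the blunt cleanup Edmund describes (delete the stale datanode storage, then reformat) can be sketched as the shell session below. This is a hedged sketch for a Hadoop 0.20.x test setup, assuming the default /tmp/hadoop-root layout seen in the log above and that you run it from the Hadoop install directory. It destroys all HDFS data, so it only makes sense on a throwaway cluster.

```shell
# WARNING: destroys all HDFS data; only for a disposable test cluster.
# Assumes the default dfs.data.dir from the log (/tmp/hadoop-root/dfs/data)
# and that the current directory is the Hadoop 0.20.x install root.
bin/stop-all.sh                     # stop namenode, datanodes, jobtracker
rm -rf /tmp/hadoop-root/dfs/data    # delete stale datanode storage (old namespaceID)
bin/hadoop namenode -format         # new filesystem gets a fresh namespaceID
bin/start-all.sh                    # datanodes adopt the new ID on first startup
```

Because the datanode directory is recreated from scratch on startup, it inherits the namenode's new namespaceID and the "Incompatible namespaceIDs" error goes away.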


