Hi guys: I'm trying to reset my Hadoop pseudo-distributed setup on my local machine. I have formatted the namenode.
There are two ways to do this:

Option 1) Synchronize the datanodes so that the namespace IDs are correct (Arpit has hinted that this is the solution), since the datanodes are bad (i.e. the namespace IDs are out of sync). Maybe I can "format" my datanodes? Or is there some other operation I can run that would synchronize these namespaces? I'm not sure which files to delete. I tried deleting the data in dfs, but I think that might have broken some other things in my setup.

Option 2) Since I have done other things to corrupt my datanodes (i.e. rm -rf on the dfs), I would ideally like to start my whole Hadoop setup over from scratch, but I'm not sure how to do that. So any feedback on how to "reinstall" Hadoop would also probably solve my problem.

On Fri, Mar 30, 2012 at 11:28 PM, JAX <jayunit...@gmail.com> wrote:
> Thanks a lot, Arpit: I will try this first thing in the morning.
>
> For now --- I need a glass of wine.
>
> Jay Vyas
> MMSB
> UCHC
>
> On Mar 30, 2012, at 10:38 PM, Arpit Gupta <ar...@hortonworks.com> wrote:
>
> > The namespaceID is persisted in the datanode data directories. As you formatted the namenode, these IDs no longer match.
> >
> > So stop the datanode, clean up the dfs.data.dir on your system (which, from the logs, appears to be "/private/tmp/hadoop-Jpeerindex/dfs/data"), and then start the datanode.
> >
> > --
> > Arpit Gupta
> > Hortonworks Inc.
> > http://hortonworks.com/
> >
> > On Mar 30, 2012, at 2:33 PM, Jay Vyas wrote:
> >
> >> Hi guys!
> >>
> >> This is very strange - I have formatted my namenode (pseudo-distributed mode) and now I'm getting some kind of namespace error.
> >>
> >> Without further ado, here is the interesting output of my logs:
> >>
> >> Last login: Fri Mar 30 19:29:12 on ttys009
> >> doolittle-5:~ Jpeerindex$
> >> doolittle-5:~ Jpeerindex$
> >> doolittle-5:~ Jpeerindex$ cat Development/hadoop-0.20.203.0/logs/*
> >> 2012-03-30 22:28:31,640 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
> >> /************************************************************
> >> STARTUP_MSG: Starting DataNode
> >> STARTUP_MSG:   host = doolittle-5.local/192.168.3.78
> >> STARTUP_MSG:   args = []
> >> STARTUP_MSG:   version = 0.20.203.0
> >> STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-security-203 -r 1099333; compiled by 'oom' on Wed May 4 07:57:50 PDT 2011
> >> ************************************************************/
> >> 2012-03-30 22:28:32,138 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
> >> 2012-03-30 22:28:32,190 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
> >> 2012-03-30 22:28:32,191 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
> >> 2012-03-30 22:28:32,191 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
> >> 2012-03-30 22:28:32,923 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
> >> 2012-03-30 22:28:32,959 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
> >> 2012-03-30 22:28:34,478 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s).
> >> 2012-03-30 22:28:36,317 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /private/tmp/hadoop-Jpeerindex/dfs/data: namenode namespaceID = 1829914379; datanode namespaceID = 1725952472
> >>     at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:232)
> >>     at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:147)
> >>     at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:354)
> >>     at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:268)
> >>     at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1480)
> >>     at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1419)
> >>     at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1437)
> >>     at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1563)
> >>     at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1573)

--
Jay Vyas
MMSB/UCHC
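P.S. Before deleting anything, the mismatch in the error above can be confirmed by hand: the namespaceID lives in the `current/VERSION` file under both the namenode's dfs.name.dir and the datanode's dfs.data.dir. The snippet below is illustrative only - it recreates two mock VERSION files with the IDs from the error message (the real files on this machine would be under /private/tmp/hadoop-Jpeerindex/dfs/) and compares them the way a by-hand check would:

```shell
# Illustrative mock of the by-hand check; point the paths at your real
# dfs.name.dir and dfs.data.dir instead of these demo directories.
mkdir -p /tmp/nn-demo/current /tmp/dn-demo/current
echo "namespaceID=1829914379" > /tmp/nn-demo/current/VERSION   # namenode side
echo "namespaceID=1725952472" > /tmp/dn-demo/current/VERSION   # datanode side

nn_id=$(grep '^namespaceID=' /tmp/nn-demo/current/VERSION | cut -d= -f2)
dn_id=$(grep '^namespaceID=' /tmp/dn-demo/current/VERSION | cut -d= -f2)

if [ "$nn_id" = "$dn_id" ]; then
  echo "namespaceIDs match"
else
  echo "namespaceIDs differ: namenode=$nn_id datanode=$dn_id"
fi
```

If the two IDs differ, wiping the datanode's data directory (as Arpit suggests) lets the datanode re-register and pick up the namenode's new namespaceID.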
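P.P.S. For Option 2, "starting over" in pseudo-distributed mode usually doesn't require reinstalling Hadoop at all. A minimal sketch, assuming a 0.20.x install with the scripts under $HADOOP_HOME/bin and all state under the default hadoop.tmp.dir of /tmp/hadoop-${USER} (adjust paths if your conf overrides these) - this is an outline, not a verified procedure for this exact setup:

```shell
# Option 2 sketch: wipe all HDFS/MapReduce state and reformat,
# rather than reinstalling. Destroys all data in the local cluster.
cd "$HADOOP_HOME"
bin/stop-all.sh                # stop namenode, datanode, and MR daemons
rm -rf /tmp/hadoop-${USER}     # default hadoop.tmp.dir: dfs/name, dfs/data, mapred state
bin/hadoop namenode -format    # writes a fresh namespaceID into a new dfs/name
bin/start-all.sh               # datanode initializes its storage against the new ID
```

Because the datanode directory is created fresh after the format, the Incompatible namespaceIDs error cannot recur.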