The property "dfs.name.dir" allows you to control where Hadoop writes NameNode metadata.

You should have a property like

<property>
 <name>dfs.name.dir</name>
 <value>/data/zhang/hadoop/name/data</value>
</property>

to make sure the NameNode data isn't being deleted when you delete the files in /tmp.
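As a quick sanity check (this assumes the example path above -- adjust it to whatever value you actually set), after reformatting you should see the NameNode image files land under that directory instead of under /tmp, roughly like:

bin/hadoop namenode -format
ls /data/zhang/hadoop/name/data/current
# you should see files along the lines of fsimage, edits, fstime and VERSION

If the image files only ever appear under /tmp/hadoop-<user>/dfs/name, the property isn't being picked up from your conf.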

-Matt


On Jun 26, 2009, at 2:33 PM, Boyu Zhang wrote:

Matt,

Thanks a lot for your reply! I did format the namenode, but I got the same error again. Actually, I successfully ran the example jar file once, but after that one time I couldn't get it to run again. I clean the /tmp dir every time before I format the namenode again (I am just testing it, so I don't worry about losing data :). Still, I get the same error when I execute bin/start-dfs.sh. I checked my conf and I can't figure out why. Here is my
conf file:

I would really appreciate it if you could take a look at it. Thanks a lot.


<configuration>

<property>
<name>fs.default.name</name>
<value>hdfs://hostname1:9000</value>
</property>


<property>
<name>mapred.job.tracker</name>
<value>hostname2:9001</value>
</property>



<property>
 <name>dfs.data.dir</name>
 <value>/data/zhang/hadoop/dfs/data</value>
 <description>Determines where on the local filesystem a DFS data node
 should store its blocks.  If this is a comma-delimited
 list of directories, then data will be stored in all named
 directories, typically on different devices.
 Directories that do not exist are ignored.
 </description>
</property>


<property>
 <name>mapred.local.dir</name>
 <value>/data/zhang/hadoop/mapred/local</value>
 <description>The local directory where MapReduce stores intermediate
 data files.  May be a comma-separated list of
 directories on different devices in order to spread disk i/o.
 Directories that do not exist are ignored.
 </description>
</property>
</configuration>


-----Original Message-----
From: Matt Massie [mailto:m...@cloudera.com]
Sent: Friday, June 26, 2009 4:31 PM
To: core-user@hadoop.apache.org
Subject: Re: Error in Cluster Startup: NameNode is not formatted

Boyu-

You didn't do anything stupid.  I've forgotten to format a NameNode
myself too.

If you check the QuickStart guide at
http://hadoop.apache.org/core/docs/current/quickstart.html
you'll see that formatting the NameNode is the first step in the
Execution section (near the bottom of the page).

The command to format the NameNode is:

hadoop namenode -format

A warning, though: you should only format your NameNode once.  Just
like formatting any filesystem, you can lose data if you (re)format.
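For what it's worth, on a fresh install the sequence usually looks roughly like this (run from the Hadoop install directory; the dfsadmin check at the end is just one way to confirm the DataNodes have registered):

bin/hadoop namenode -format      # one time only, on the namenode machine
bin/start-dfs.sh                 # starts the NameNode and DataNodes
bin/hadoop dfsadmin -report      # should list your DataNodes once HDFS is up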

Good luck.

-Matt

On Jun 26, 2009, at 1:25 PM, Boyu Zhang wrote:

Hi all,

I am a student and I am trying to install Hadoop on a cluster. I have
one machine running the namenode, one running the jobtracker, and two slaves.

When I run bin/start-dfs.sh, something is wrong with my
namenode: it won't start. Here is the error message in the log file:

ERROR org.apache.hadoop.fs.FSNamesystem: FSNamesystem initialization failed.
java.io.IOException: NameNode is not formatted.
      at org.apache.hadoop.dfs.FSImage.recoverTransitionRead(FSImage.java:243)
      at org.apache.hadoop.dfs.FSDirectory.loadFSImage(FSDirectory.java:80)
      at org.apache.hadoop.dfs.FSNamesystem.initialize(FSNamesystem.java:294)
      at org.apache.hadoop.dfs.FSNamesystem.<init>(FSNamesystem.java:273)
      at org.apache.hadoop.dfs.NameNode.initialize(NameNode.java:148)
      at org.apache.hadoop.dfs.NameNode.<init>(NameNode.java:193)
      at org.apache.hadoop.dfs.NameNode.<init>(NameNode.java:179)
      at org.apache.hadoop.dfs.NameNode.createNameNode(NameNode.java:830)
      at org.apache.hadoop.dfs.NameNode.main(NameNode.java:839)


I think it is something stupid I did; could somebody help me out?
Thanks a lot!


Sincerely,

Boyu Zhang


