Hi

I am setting up a Hadoop cluster with one master and five slaves. The single-node installation worked fine, but since moving to the multi-node setup the namenode prints the message shown below, even after I run the namenode format command.
I have also included the config files below.

The directories I am using exist on all nodes, the Hadoop installation is in the same location on every node, and the config files are identical everywhere.
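(For what it's worth, a quick way to double-check that the conf files really are identical on every node is to compare checksums over ssh; the slave hostnames below are just placeholders, not my actual ones:)

# compare checksums of the *-site.xml files on the master and each slave
md5sum /usr/local/hadoop/conf/*-site.xml
for host in ClusterSlave1 ClusterSlave2 ClusterSlave3 ClusterSlave4 ClusterSlave5; do
  ssh hduser@$host "md5sum /usr/local/hadoop/conf/*-site.xml"
done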

I suppose it is some simple mistake on my part, as I am quite new to Hadoop.

Regards
Mathias

hduser@POISN-server:/usr/local/hadoop$ bin/hadoop namenode -format
Warning: $HADOOP_HOME is deprecated.

12/04/27 10:39:52 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = POISN-server/127.0.1.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 1.0.2
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0.2 -r 1304954; compiled by 'hortonfo' on Sat Mar 24 23:58:21 UTC 2012
************************************************************/
Re-format filesystem in /app/hadoop/name ? (Y or N) y
Format aborted in /app/hadoop/name
12/04/27 10:39:55 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at POISN-server/127.0.1.1
************************************************************/
hduser@POISN-server:/usr/local/hadoop$ bin/hadoop namenode
Warning: $HADOOP_HOME is deprecated.

12/04/27 10:40:04 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = POISN-server/127.0.1.1
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 1.0.2
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0.2 -r 1304954; compiled by 'hortonfo' on Sat Mar 24 23:58:21 UTC 2012
************************************************************/
12/04/27 10:40:04 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
12/04/27 10:40:04 INFO impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
12/04/27 10:40:04 INFO impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
12/04/27 10:40:04 INFO impl.MetricsSystemImpl: NameNode metrics system started
12/04/27 10:40:05 INFO impl.MetricsSourceAdapter: MBean for source ugi registered.
12/04/27 10:40:05 WARN impl.MetricsSystemImpl: Source name ugi already exists!
12/04/27 10:40:05 INFO impl.MetricsSourceAdapter: MBean for source jvm registered.
12/04/27 10:40:05 INFO impl.MetricsSourceAdapter: MBean for source NameNode registered.
12/04/27 10:40:05 INFO util.GSet: VM type       = 64-bit
12/04/27 10:40:05 INFO util.GSet: 2% max memory = 17.77875 MB
12/04/27 10:40:05 INFO util.GSet: capacity      = 2^21 = 2097152 entries
12/04/27 10:40:05 INFO util.GSet: recommended=2097152, actual=2097152
12/04/27 10:40:05 INFO namenode.FSNamesystem: fsOwner=hduser
12/04/27 10:40:05 INFO namenode.FSNamesystem: supergroup=supergroup
12/04/27 10:40:05 INFO namenode.FSNamesystem: isPermissionEnabled=true
12/04/27 10:40:05 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
12/04/27 10:40:05 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
12/04/27 10:40:05 INFO namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
12/04/27 10:40:05 INFO namenode.NameNode: Caching file names occuring more than 10 times
12/04/27 10:40:05 ERROR namenode.FSNamesystem: FSNamesystem initialization failed.
java.io.IOException: NameNode is not formatted.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:325)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
...

12/04/27 10:40:05 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at POISN-server/127.0.1.1
************************************************************/


hduser@POISN-server:/usr/local/hadoop/conf$ cat *-site.xml

core-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://ClusterMaster:9000</value>
</property>
</configuration>
hdfs-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>

<property>
<name>dfs.name.dir</name>
<value>/app/hadoop/name</value>
</property>

<property>
<name>dfs.data.dir</name>
<value>/app/hadoop/data</value>
</property>

</configuration>
mapred-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>

<property>
<name>mapred.job.tracker</name>
<value>ClusterMaster:9001</value>
</property>

<property>
<name>mapred.system.dir</name>
<value>/app/hadoop/system</value>
</property>

<property>
<name>mapred.local.dir</name>
<value>/app/hadoop/local</value>
</property>

</configuration>

taskcontroller.cfg:
mapred.local.dir=/app/hadoop/local
hadoop.log.dir=
mapred.tasktracker.tasks.sleeptime-before-sigkill=#sleep time before sig kill is to be sent to process group after sigterm is sent. Should be in seconds
mapreduce.tasktracker.group=hadoop

