Hey All,

I'm able to start my master server, but none of the slave nodes come up
(unless I list the master itself as a slave).  After searching a bit, it
seems people usually hit this when they forget to set fs.default.name, but
I've got it set in core-site.xml (listed below).  Every slave logs the
error below on startup:

STARTUP_MSG: Starting DataNode 
STARTUP_MSG:   host = slave1/192.168.0.234 
STARTUP_MSG:   args = [] 
STARTUP_MSG:   version = 0.20.0 
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.20 -r 763504; compiled by 'ndaley' on Thu Apr  9 05:18:40 UTC 2009 
************************************************************/ 
2009-06-18 09:06:49,369 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.lang.NullPointerException 
        at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:134) 
        at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:156) 
        at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:160) 
        at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:246) 
        at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:216) 
        at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1283) 
        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1238) 
        at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1246) 
        at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1368) 

2009-06-18 09:06:49,370 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at slave1/192.168.0.234
************************************************************/

==============================================
core-site.xml
==============================================

<property> 
   <name>fs.default.name</name> 
   <value>hdfs://master:54310</value> 
   <description>The name of the default file system.  A URI whose 
   scheme and authority determine the FileSystem implementation.  The 
   uri's scheme determines the config property (fs.SCHEME.impl) naming 
   the FileSystem implementation class.  The uri's authority is used to 
   determine the host, port, etc. for a filesystem.</description> 
</property> 
<property> 
  <name>hadoop.tmp.dir</name> 
  <value>/data/hadoop-0.20.0-${user.name}</value> 
  <description>A base for other temporary directories.</description> 
</property> 
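For what it's worth, here's a quick sanity check I can run to confirm that the core-site.xml a given node actually reads does define fs.default.name (the NPE comes out of NameNode.getAddress(), which is what you'd expect if the value resolves to null on the datanode side). The path and value below are placeholders matching my config; on the real cluster this would point at the conf dir on each slave:

```shell
# Write a throwaway copy of the property for illustration; on a slave,
# $conf would instead be $HADOOP_HOME/conf/core-site.xml (hypothetical path).
conf=/tmp/core-site-check.xml
cat > "$conf" <<'EOF'
<property>
  <name>fs.default.name</name>
  <value>hdfs://master:54310</value>
</property>
EOF

# Extract the configured filesystem URI; if this prints nothing,
# the datanode on that host has no fs.default.name to bind to.
grep -A1 '<name>fs.default.name</name>' "$conf" | grep -o 'hdfs://[^<]*'
```

Running that over the slaves (e.g. via ssh in a loop over conf/slaves) would at least rule out a config file that never made it out to the nodes.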
-- 
View this message in context: 
http://www.nabble.com/Upgrading-from-.19-to-.20-problems-tp24095348p24095348.html
Sent from the Hadoop core-user mailing list archive at Nabble.com.