Just to be sure, I verified that fs.default.name is set in core-site.xml.  The
config files are identical to those on the other 10 working datanodes, as they
were scp'd from a central location.
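For what it's worth, the trace below bottoms out in NetUtils.createSocketAddr,
which throws an NPE when it is handed a null target string, i.e. when
conf.get("fs.default.name") resolves to nothing.  A minimal sketch of that
failure mode (NpeRepro is a hypothetical class of mine, not the actual
DataNode code path):

  import java.net.InetSocketAddress;
  import org.apache.hadoop.net.NetUtils;

  public class NpeRepro {
      public static void main(String[] args) {
          // What conf.get("fs.default.name") returns if the property never loads:
          String target = null;
          // createSocketAddr parses host:port out of the target string, so a
          // null target dies with a NullPointerException, matching the frame
          // at NetUtils.java:134 in the trace.
          InetSocketAddress addr = NetUtils.createSocketAddr(target);
      }
  }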

> more core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://hadoopxxxxx.xx.weather.com:9000/</value>
    <final>true</final>
  </property>
</configuration>
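
Since the file itself looks right, my next check is what Configuration
actually resolves at runtime on the new box.  A minimal sketch
(PrintFsDefault is a hypothetical helper; it assumes hadoop-core and the
conf dir are on the classpath):

  import org.apache.hadoop.conf.Configuration;

  public class PrintFsDefault {
      public static void main(String[] args) {
          // Loads core-default.xml and core-site.xml from the classpath,
          // the same way the daemons do.
          Configuration conf = new Configuration();
          System.out.println("fs.default.name = " + conf.get("fs.default.name"));
      }
  }

If that prints null here but the hdfs:// URI on the working nodes, the daemon
is picking up a different (or empty) core-site.xml, e.g. via a stray
HADOOP_CONF_DIR setting.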



On 3/19/10 12:06 PM, "Jean-Daniel Cryans" <jdcry...@apache.org> wrote:

> I would argue that it could be the same problem. Most users hit it
> because the fs.default.name conf is set up wrong when they start the
> cluster for the first time, so they see it on the namenode first, but
> all the other components do the same thing. Can you verify that conf
> on your new node?
> 
> J-D
> 
> On Fri, Mar 19, 2010 at 5:14 AM, SKester <skes...@weather.com> wrote:
>> Google was my first step in trying to track this down.  I found the issue
>> you listed, but that is a namenode-related problem.  Our namenode and the
>> other 10 datanodes are just fine.  Only the new box is having this problem.
>> 
>> 
>> On 3/18/10 4:41 PM, "Jean-Daniel Cryans" <jdcry...@apache.org> wrote:
>> 
>>> Google is your friend ;)
>>> 
>>> https://issues.apache.org/jira/browse/HADOOP-5687
>>> 
>>> J-D
>>> 
>>> On Thu, Mar 18, 2010 at 1:29 PM, Scott <skes...@weather.com> wrote:
>>>> We have a working 10-node cluster and are trying to add an 11th box (insert
>>>> Spinal Tap joke here).  The box (CentOS Linux) was built in an identical
>>>> manner to the other 10 and has the same version of Hadoop (0.20.2).  The
>>>> configs are exactly the same as on the other nodes.  However, when trying
>>>> to start the hadoop daemons it throws an NPE.  Here is all that is written
>>>> to the logs.  Any idea what's causing this?
>>>> 
>>>> ************************************************************/
>>>> 2010-03-18 16:09:42,993 INFO
>>>> org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
>>>> /************************************************************
>>>> STARTUP_MSG: Starting DataNode
>>>> STARTUP_MSG:   host = hadoop0b10/192.168.60.100
>>>> STARTUP_MSG:   args = []
>>>> STARTUP_MSG:   version = 0.20.2
>>>> STARTUP_MSG:   build =
>>>> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r
>>>> 911707; compiled by 'chrisdo' on Fri Feb
>>>> 19 08:07:34 UTC 2010
>>>> ************************************************************/
>>>> 2010-03-18 16:09:43,058 ERROR
>>>> org.apache.hadoop.hdfs.server.datanode.DataNode:
>>>> java.lang.NullPointerException
>>>>   at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:134)
>>>>   at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:156)
>>>>   at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:160)
>>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:246)
>>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:216)
>>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1283)
>>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1238)
>>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1246)
>>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1368)
>>>> 
>>>> 2010-03-18 16:09:43,059 INFO
>>>> org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
>>>> /************************************************************
>>>> SHUTDOWN_MSG: Shutting down DataNode at hadoop0b10/192.168.60.100
>>>> ************************************************************/
>>>> 
>>>> 
>> 
>> 
