I would argue that it could be the same problem. Most users hit it
because the fs.default.name conf is set up incorrectly when they start
the cluster for the first time, so they see it on the namenode first,
but all the other components fail the same way. Can you verify that
conf on your new node?
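
For reference, a minimal core-site.xml sketch of what that property
should look like (the host and port below are placeholders; substitute
the actual namenode address your other 10 nodes use):

```xml
<?xml version="1.0"?>
<!-- core-site.xml (Hadoop 0.20.x) -->
<configuration>
  <property>
    <!-- If this is missing or malformed on the new box, DataNode startup
         fails with the NPE in NetUtils.createSocketAddr seen in your log. -->
    <name>fs.default.name</name>
    <!-- Placeholder value: point this at your real namenode host:port. -->
    <value>hdfs://namenode.example.com:8020</value>
  </property>
</configuration>
```

A quick way to check is to diff this file against a copy from one of the
working datanodes.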

J-D

On Fri, Mar 19, 2010 at 5:14 AM, SKester <skes...@weather.com> wrote:
> Google was my first step in trying to track this down.  I found the issue
> you listed, but that is a namenode-related problem.  Our namenode and the
> other 10 datanodes are just fine.  Only the new box is having this problem.
>
>
> On 3/18/10 4:41 PM, "Jean-Daniel Cryans" <jdcry...@apache.org> wrote:
>
>> Google is your friend ;)
>>
>> https://issues.apache.org/jira/browse/HADOOP-5687
>>
>> J-D
>>
>> On Thu, Mar 18, 2010 at 1:29 PM, Scott <skes...@weather.com> wrote:
>>> We have a working 10 node cluster and are trying to add an 11th box (insert
>>> Spinal Tap joke here).  The box (Centos Linux) was built in an identical
>>> manner to the other 10 and has the same version of hadoop (0.20.2).  The
>>> configs are identical to the other nodes'.  However, when trying to start
>>> the hadoop daemons it throws an NPE.  Here is all that is written to the
>>> logs.  Any idea what's causing this?
>>>
>>> ************************************************************/
>>> 2010-03-18 16:09:42,993 INFO
>>> org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
>>> /************************************************************
>>> STARTUP_MSG: Starting DataNode
>>> STARTUP_MSG:   host = hadoop0b10/192.168.60.100
>>> STARTUP_MSG:   args = []
>>> STARTUP_MSG:   version = 0.20.2
>>> STARTUP_MSG:   build =
>>> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r
>>> 911707; compiled by 'chrisdo' on Fri Feb
>>> 19 08:07:34 UTC 2010
>>> ************************************************************/
>>> 2010-03-18 16:09:43,058 ERROR
>>> org.apache.hadoop.hdfs.server.datanode.DataNode:
>>> java.lang.NullPointerException
>>>   at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:134)
>>>   at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:156)
>>>   at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:160)
>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:246)
>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:216)
>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1283)
>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1238)
>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1246)
>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1368)
>>>
>>> 2010-03-18 16:09:43,059 INFO
>>> org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
>>> /************************************************************
>>> SHUTDOWN_MSG: Shutting down DataNode at hadoop0b10/192.168.60.100
>>> ************************************************************/
>>>
>>>
>
>
