Datanode does not start up if the local machine's DNS isn't working right and 
dfs.datanode.dns.interface==default
---------------------------------------------------------------------------------------------------------------

                 Key: HADOOP-3426
                 URL: https://issues.apache.org/jira/browse/HADOOP-3426
             Project: Hadoop Core
          Issue Type: Bug
          Components: dfs
    Affects Versions: 0.16.3
         Environment: Ubuntu 8.04, at home, no reverse DNS
            Reporter: Steve Loughran
            Priority: Minor


This is the third Java project I've been involved in that doesn't work on my 
home network, due to implementation issues with 
java.net.InetAddress.getLocalHost(), issues that only show up on an unmanaged 
network. Fortunately my home network exists to find these problems early.

In Hadoop, if the local hostname doesn't resolve, the datanode does not start up:

Caused by: java.net.UnknownHostException: k2: k2
at java.net.InetAddress.getLocalHost(InetAddress.java:1353)
at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:185)
at org.apache.hadoop.dfs.DataNode.startDataNode(DataNode.java:184)
at org.apache.hadoop.dfs.DataNode.&lt;init&gt;(DataNode.java:162)
at org.apache.hadoop.dfs.ExtDataNode.&lt;init&gt;(ExtDataNode.java:55)
at org.smartfrog.services.hadoop.components.datanode.DatanodeImpl.sfStart(DatanodeImpl.java:60)
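
For what it's worth, the failure is easy to reproduce outside Hadoop. This is a
minimal sketch (not Hadoop code, class name made up for illustration) showing
getLocalHost() failing on a box whose hostname has no DNS or /etc/hosts entry:

import java.net.InetAddress;
import java.net.UnknownHostException;

// Minimal reproduction, independent of Hadoop: on a machine whose hostname
// (e.g. "k2") is not resolvable, getLocalHost() throws UnknownHostException.
public class LocalHostCheck {
    public static void main(String[] args) {
        try {
            InetAddress addr = InetAddress.getLocalHost();
            System.out.println("Resolved local host: " + addr.getHostName()
                    + " / " + addr.getHostAddress());
        } catch (UnknownHostException e) {
            // Same exception that aborts DataNode.startDataNode() above.
            System.out.println("getLocalHost() failed: " + e);
        }
    }
}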

While requiring working DNS is a valid assumption in a production (non-virtual) 
cluster, if you are playing with VMware/Xen private networks or on a home 
network, you can't rely on DNS.

1. In these situations, it's usually better to fall back to using "localhost" or 
127.0.0.1 as the hostname if Java can't work it out for itself.
2. It's often good to cache this value if it is used in lots of parts of the 
system, otherwise the 30s resolver timeouts can cause problems of their own 
(a rough sketch of both ideas follows below).
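
Something along these lines, purely as a sketch: the class and method names are
hypothetical and not existing Hadoop APIs, just an illustration of the
fallback-plus-caching idea.

import java.net.InetAddress;
import java.net.UnknownHostException;

// Hypothetical helper sketching points 1 and 2 above.
public final class SafeLocalHost {

    // Cache the result so repeated callers don't each pay the (potentially
    // ~30s) resolver timeout when DNS is broken.
    private static volatile String cachedHostname;

    public static String getHostname() {
        String name = cachedHostname;
        if (name == null) {
            try {
                name = InetAddress.getLocalHost().getHostName();
            } catch (UnknownHostException e) {
                // Fall back to the loopback name when the local hostname
                // cannot be resolved (home network, VMware/Xen private net).
                name = "localhost";
            }
            cachedHostname = name;
        }
        return name;
    }
}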


