Is security on? I'm not entirely sure (and I think it might be illuminating to 
the rest of us when you work this out, so please email back when you do), but I 
am guessing that a code change may be required. I think I remember someone 
telling me that hostnames are reverse-lookup'd to verify identities or some 
such.
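
To illustrate what a reverse lookup is (and the failure mode an IP-only cluster would hit if the daemons do verify peers this way), here's a minimal sketch using Python's standard library; this is just an illustration of the DNS mechanism, not Hadoop's actual verification code:

```python
import socket

def reverse_lookup(ip):
    """Attempt a reverse-DNS (PTR) lookup for an IP address.

    Returns the resolved hostname, or None when no PTR record
    exists -- which is what a host-file-free, IP-only setup
    would run into if the cluster insists on resolving peers.
    """
    try:
        hostname, _aliases, _addresses = socket.gethostbyaddr(ip)
        return hostname
    except socket.herror:
        return None

# Loopback usually has a PTR record; an unresolvable address returns None.
print(reverse_lookup("127.0.0.1"))
```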


________________________________
 From: sodul <s...@odul.com>
To: common-user@hadoop.apache.org 
Sent: Tuesday, October 1, 2013 4:13 AM
Subject: IP based hadoop cluster
 

For various reasons I need to setup hadoop without the need for hostnames or
/etc/hosts files.

I've had good success configuring MapReduce and HDFS, and so far they seem to
work properly (the datanodes register with the namenode, and the tasktrackers
register with the jobtracker).

My current issue is with forcing the namenode to use an IP address. On the
datanodes setting dfs.datanode.hostname to the IP address worked just fine,
but I cannot find the equivalent for the namenode.
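
For reference, the datanode-side setting described above would go in hdfs-site.xml on each datanode; the IP address below is a placeholder for that node's own address:

```xml
<!-- hdfs-site.xml (per datanode); 10.0.0.21 is a placeholder IP -->
<property>
  <name>dfs.datanode.hostname</name>
  <value>10.0.0.21</value>
</property>
```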

The dfshealth.jsp page shows up properly on the namenode and I'm able to list
the Live Nodes; however, that page shows the NameNode by hostname, and when I
click 'Browse the filesystem' I get forwarded to one of the datanodes with
&nnaddr=namenode-hostname:54310, which throws an exception:
java.net.UnknownHostException: namenode-hostname

If I edit the URL to use the namenode IP address instead, e.g.
&nnaddr=1.2.3.4:54310, then it works.

I've tried setting dfs.datanode.hostname, slave.host.name, dfs.http.address,
fs.default.name, fs.defaultFS ... but nothing has worked so far.

The Hadoop version is 2.0.0-cdh4.1.2.




--
View this message in context: 
http://hadoop.6.n7.nabble.com/IP-based-hadoop-cluster-tp70191.html
Sent from the common-user mailing list archive at Nabble.com.