Hi Users,
I am trying to install Hadoop 0.20.2 on a cluster of two virtual machines, one
acting as master and the other as slave.
I am able to ssh from the master to the slave and vice versa, but when I run
start-dfs.sh the NameNode does not start.
I checked the NameNode log and it says:
org.apache.hadoop.hdfs.server.namenode.NameNode: java.net.BindException: Problem binding to sk.r252.0/10.2.252.0:54310 : Cannot assign requested address
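
To take Hadoop out of the picture, a small stand-alone check like the following
(just a sketch; the hostname and port are copied from fs.default.name below) can
be compiled with javac and run on the master to see whether anything at all can
bind sk.r252.0:54310:

import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

// Stand-alone sketch, not Hadoop code: try to bind the same host:port the
// NameNode is configured to use (hdfs://sk.r252.0:54310 in core-site.xml).
// If this also fails with "Cannot assign requested address", the problem is
// at the OS/network level rather than in Hadoop itself.
public class BindCheck {
    public static void main(String[] args) throws Exception {
        InetAddress addr = InetAddress.getByName("sk.r252.0"); // resolved via /etc/hosts
        ServerSocket server = new ServerSocket();
        server.bind(new InetSocketAddress(addr, 54310));
        System.out.println("Bound successfully to " + server.getLocalSocketAddress());
        server.close();
    }
}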

10.2.252.0 is the private IP address of the master virtual machine in the
cluster (sk.r252.0).
Does Hadoop require all the nodes in the cluster to have separate public IP
addresses to set up a Hadoop cluster?
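
For reference, here is a second little sketch (plain Java again, nothing
Hadoop-specific) that lists the addresses actually assigned to the VM's network
interfaces; as far as I understand, the NameNode can only bind to an address
that shows up in this list (or to the 0.0.0.0 wildcard):

import java.net.InetAddress;
import java.net.NetworkInterface;
import java.util.Collections;

// Sketch: print every address assigned to this machine's interfaces,
// to confirm whether 10.2.252.0 is really one of them.
public class ListAddresses {
    public static void main(String[] args) throws Exception {
        for (NetworkInterface nic : Collections.list(NetworkInterface.getNetworkInterfaces())) {
            for (InetAddress addr : Collections.list(nic.getInetAddresses())) {
                System.out.println(nic.getName() + " -> " + addr.getHostAddress());
            }
        }
    }
}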

Config files:
--masters--
sk.r252.0

--slaves--
sk.r252.0
sk.r252.1

--core-site.xml--
<configuration>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop/tmp</value>
  <description>A base for other temporary directories.</description>
</property>

<property>
  <name>fs.default.name</name>
  <value>hdfs://sk.r252.0:54310</value>
  <description>The name of the default file system.  A URI whose
  scheme and authority determine the FileSystem implementation.  The
  uri's scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class.  The uri's authority is used to
  determine the host, port, etc. for a filesystem.</description>
</property>
</configuration>

--mapred-site.xml--
<configuration>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop/tmp</value>
  <description>A base for other temporary directories.</description>
</property>

<property>
  <name>fs.default.name</name>
  <value>hdfs://sk.r252.0:54310</value>
  <description>The name of the default file system.  A URI whose
  scheme and authority determine the FileSystem implementation.  The
  uri's scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class.  The uri's authority is used to
  determine the host, port, etc. for a filesystem.</description>
</property>
</configuration>

--hdfs-site.xml--
<configuration>
<property>
  <name>dfs.replication</name>
  <value>2</value>
  <description>Default block replication.
  The actual number of replications can be specified when the file is created.
  The default is used if replication is not specified in create time.
  </description>
</property>
</configuration>

Hostname of master: sk.r252.0
Hostname of slave:  sk.r252.1

I am able to ssh from the master to the slave.

--/etc/hosts--
127.0.0.1       localhost.localdomain localhost
10.2.252.0 sk.r252.0
10.2.252.1 sk.r252.1

namenode log:

2012-10-17 16:59:48,599 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = sk.r252.0/10.2.252.0
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.2
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
2012-10-17 16:59:50,251 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.net.BindException: Problem binding to sk.r252.0/10.2.252.0:54310 : Cannot assign requested address
        at org.apache.hadoop.ipc.Server.bind(Server.java:190)
        at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:253)
        at org.apache.hadoop.ipc.Server.<init>(Server.java:1026)
        at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:488)
        at org.apache.hadoop.ipc.RPC.getServer(RPC.java:450)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:191)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
Caused by: java.net.BindException: Cannot assign requested address
        at sun.nio.ch.Net.bind(Native Method)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:137)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:77)
        at org.apache.hadoop.ipc.Server.bind(Server.java:188)
        ... 8 more

2012-10-17 16:59:50,321 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at sk.r252.0/10.2.252.0


datanode log:
2012-10-17 16:59:52,134 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = sk.r252.0/10.2.252.0
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.2
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
2012-10-17 16:59:56,879 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: sk.r252.0/10.2.252.0:54310. Already tried 0 time(s).
2012-10-17 16:59:59,627 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: sk.r252.0/10.2.252.0:54310. Already tried 1 time(s).
2012-10-17 17:00:02,369 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: sk.r252.0/10.2.252.0:54310. Already tried 2 time(s).
2012-10-17 17:00:05,118 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: sk.r252.0/10.2.252.0:54310. Already tried 3 time(s).
2012-10-17 17:00:07,866 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: sk.r252.0/10.2.252.0:54310. Already tried 4 time(s).
2012-10-17 17:00:10,611 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: sk.r252.0/10.2.252.0:54310. Already tried 5 time(s).
2012-10-17 17:00:13,359 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: sk.r252.0/10.2.252.0:54310. Already tried 6 time(s).
2012-10-17 17:00:16,101 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: sk.r252.0/10.2.252.0:54310. Already tried 7 time(s).
2012-10-17 17:00:18,846 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: sk.r252.0/10.2.252.0:54310. Already tried 8 time(s).
2012-10-17 17:00:21,590 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: sk.r252.0/10.2.252.0:54310. Already tried 9 time(s).
2012-10-17 17:00:21,597 INFO org.apache.hadoop.ipc.RPC: Server at sk.r252.0/10.2.252.0:54310 not available yet, Zzzzz...


Can someone please help me with this issue? What is wrong?

Regards
Sundeep Kambhampati
