Hey Keith,

I'm guessing that whatever "ip-13-0-177-110" resolves to (ping it to
check) is not a local IP on that machine (or rather, it isn't the
machine you intended to start the NameNode on)?
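
To make that check concrete, here's a quick Python sketch (the hostname
string is the one from your log; substitute whatever your configs point at).
It uses gethostbyname, which goes through roughly the same lookup path
(/etc/hosts, then DNS) that the JVM will see:

```python
import socket

def resolve(host):
    """Return the IPv4 address `host` resolves to on this machine."""
    return socket.gethostbyname(host)

# Run these two on the NameNode machine and compare; if they differ,
# the bind target is not a local address ("ip-13-0-177-110" is the
# hostname from your log):
#   resolve("ip-13-0-177-110")
#   resolve(socket.gethostname())
print(resolve("localhost"))
```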

I'm not sure whether EC2 grants static IPs, but a change in the
assigned IP (check with ifconfig) would otherwise explain the "Cannot
assign requested address" error we got back from the bind() syscall.
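
For what it's worth, that kernel-level failure is easy to reproduce
outside Hadoop: bind() on an address that no local interface owns fails
with EADDRNOTAVAIL, which Java surfaces as "Cannot assign requested
address". A minimal sketch (192.0.2.1 is a documentation-only TEST-NET
address, so it should not be configured on any real box):

```python
import errno
import socket

def can_bind(host, port=0):
    """Try to bind a TCP socket to (host, port). Returns True if the
    kernel accepts the address, False on EADDRNOTAVAIL ("Cannot assign
    requested address"), i.e. when no local interface owns that IP."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((host, port))
        return True
    except OSError as e:
        if e.errno == errno.EADDRNOTAVAIL:
            return False
        raise
    finally:
        s.close()

print(can_bind("127.0.0.1"))  # loopback is always local
print(can_bind("192.0.2.1"))  # TEST-NET-1, not assigned on this box
```

If can_bind() of the address your NameNode hostname resolves to returns
False on the master, that's the whole story behind the log line.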

On Tue, Feb 19, 2013 at 4:30 AM, Keith Wiley <[email protected]> wrote:
> This is Hadoop 2.0.  Formatting the namenode produces no errors in the shell, 
> but the log shows this:
>
> 2013-02-18 22:19:46,961 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
> java.net.BindException: Problem binding to [ip-13-0-177-110:9212] java.net.BindException: Cannot assign requested address; For more details see: http://wiki.apache.org/hadoop/BindException
>         at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:710)
>         at org.apache.hadoop.ipc.Server.bind(Server.java:356)
>         at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:454)
>         at org.apache.hadoop.ipc.Server.<init>(Server.java:1833)
>         at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:866)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:375)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:350)
>         at org.apache.hadoop.ipc.RPC.getServer(RPC.java:695)
>         at org.apache.hadoop.ipc.RPC.getServer(RPC.java:684)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:238)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:452)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:434)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:608)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:589)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1140)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1204)
> 2013-02-18 22:19:46,988 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
> 2013-02-18 22:19:46,990 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at ip-13-0-177-11/127.0.0.1
> ************************************************************/
>
> No Java processes start (though I wouldn't expect formatting the namenode
> to start any; only starting the namenode or datanode should do that), and
> "hadoop fs -ls /" gives me this:
>
> ls: Call From [CLIENT_HOST]/127.0.0.1 to [MASTER_HOST]:9000 failed on 
> connection exception: java.net.ConnectException: Connection refused; For more 
> details see:  http://wiki.apache.org/hadoop/ConnectionRefused
>
> My /etc/hosts looks like this:
> 127.0.0.1   localhost localhost.localdomain CLIENT_HOST
> MASTER_IP MASTER_HOST master
> SLAVE_IP SLAVE_HOST slave01
>
> This is on EC2.  All of the nodes are in the same security group and the
> security group has full inbound access.  I can ssh between all three machines
> (client/master/slave) without a password via authorized_keys.  I can ping the
> master node from the client machine (although I don't know how to ping a
> specific port, such as the HDFS port, 9000).  Telnet doesn't behave on EC2,
> which makes port testing a little difficult.
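
(Inline, on the port question: the moral equivalent of "pinging a port"
is just a TCP connect attempt. `nc -zv MASTER_HOST 9000` works if netcat
is installed; otherwise a few lines of Python do the same thing. The
MASTER_HOST/9000 pair below is the placeholder from your mail:)

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds, i.e.
    something is listening there and is reachable through any
    firewalls/security groups; False on refusal or timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. on the client machine:
#   port_open("MASTER_HOST", 9000)
```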
>
> Any ideas?
>
> ________________________________________________________________________________
> Keith Wiley     [email protected]     keithwiley.com    
> music.keithwiley.com
>
> "The easy confidence with which I know another man's religion is folly teaches
> me to suspect that my own is also."
>                                            --  Mark Twain
> ________________________________________________________________________________
>



--
Harsh J
