[ 
http://issues.apache.org/jira/browse/HADOOP-375?page=comments#action_12423271 ] 
            
Devaraj Das commented on HADOOP-375:
------------------------------------

Some more points:
I think the timing issue only needs to be handled when we allow port 
roll-overs. The webserver may end up binding to a port other than the default 
one, and until it has started up successfully we cannot know which port it 
actually bound to. If getPort() is called before the webserver has fully 
started, the value returned is the port the webserver is currently *trying* 
to bind to, with no guarantee that the attempt will succeed. 
If we don't allow port roll-overs (i.e., the last argument to 
StatusHttpServer's constructor is false), the timing issue goes away: 
getPort() will always return the default port (in the absence of an 
exception), as it is supposed to.
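To make the race concrete, here is a minimal sketch (not StatusHttpServer's actual code; class and method names are illustrative) of why getPort() is only trustworthy after start() has returned when roll-over is enabled:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

// Illustrative sketch of the timing issue described above.
public class RollOverServer {
    private int port;            // the port we are currently *trying* to bind
    private ServerSocket socket; // non-null only after a successful bind

    RollOverServer(int configuredPort) { this.port = configuredPort; }

    // Before start() returns, this is only the current attempt; the bind on
    // this port may still fail and the server may roll over to another port.
    int getPort() { return port; }

    void start(int maxAttempts) throws IOException {
        for (int i = 0; i < maxAttempts; i++, port++) {
            try {
                socket = new ServerSocket();
                socket.bind(new InetSocketAddress(port));
                return; // bound: from here on getPort() is authoritative
            } catch (IOException busy) {
                socket = null; // "Address already in use": roll over
            }
        }
        throw new IOException("could not bind after " + maxAttempts + " attempts");
    }
}
```

A caller that reads getPort() only after start() has returned always sees the port that was actually bound; with roll-over disabled (maxAttempts effectively 1), start() either binds the configured port or throws, so getPort() always equals the default.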

> Introduce a way for datanodes to register their HTTP info ports with the 
> NameNode
> ---------------------------------------------------------------------------------
>
>                 Key: HADOOP-375
>                 URL: http://issues.apache.org/jira/browse/HADOOP-375
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.5.0
>            Reporter: Devaraj Das
>         Assigned To: Devaraj Das
>         Attachments: content_browsing.new.patch, content_browsing.new.patch
>
>
> If we have multiple datanodes on a single machine, the Jetty servers 
> (other than the first one) won't be able to bind to the fixed HTTP port. 
> One solution is to have each datanode pick a free port (starting from a 
> configured port value) and then inform the namenode about it, so that the 
> namenode can do redirects, etc.
> Johan Oskarson reported this problem. 
> If a computer has a second dfs data dir in the config, it doesn't start 
> properly because of:
> Exception in thread "main" java.io.IOException: Problem starting http server
>         at org.apache.hadoop.mapred.StatusHttpServer.start(StatusHttpServer.java:182)
>         at org.apache.hadoop.dfs.DataNode.<init>(DataNode.java:170)
>         at org.apache.hadoop.dfs.DataNode.makeInstanceForDir(DataNode.java:1045)
>         at org.apache.hadoop.dfs.DataNode.run(DataNode.java:999)
>         at org.apache.hadoop.dfs.DataNode.runAndWait(DataNode.java:1015)
>         at org.apache.hadoop.dfs.DataNode.main(DataNode.java:1066)
> Caused by: org.mortbay.util.MultiException[java.net.BindException: Address already in use]
>         at org.mortbay.http.HttpServer.doStart(HttpServer.java:731)
>         at org.mortbay.util.Container.start(Container.java:72)
>         at org.apache.hadoop.mapred.StatusHttpServer.start(StatusHttpServer.java:159)
>         ... 5 more
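The scheme proposed in the description (probe for a free port starting from the configured value, then report the bound port to the namenode) could be sketched as follows. This is hypothetical code, not Hadoop's actual API; findFreePort and the default port value of 50075 are assumptions for illustration:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

// Hypothetical sketch of the proposed fix: probe upward from the configured
// port; the port the datanode actually binds is what it would later report
// to the namenode during registration.
public class InfoPortProbe {

    // Return the first port at or after 'start' that we can bind.
    static int findFreePort(int start, int maxAttempts) throws IOException {
        for (int port = start; port < start + maxAttempts; port++) {
            try (ServerSocket s = new ServerSocket()) {
                s.bind(new InetSocketAddress(port));
                return port; // bind succeeded, so this port was free just now
            } catch (IOException busy) {
                // "Address already in use": try the next port
            }
        }
        throw new IOException("no free port in [" + start + ", "
                + (start + maxAttempts) + ")");
    }

    public static void main(String[] args) throws IOException {
        int configured = 50075;                 // assumed default info port
        int actual = findFreePort(configured, 100);
        // In the real fix, the datanode would send 'actual' to the namenode
        // so the namenode can redirect browsing requests to it.
        System.out.println("datanode info port: " + actual);
    }
}
```

One caveat of probe-then-release: another process could grab the port between the probe and the real server's bind, which is exactly why the actual server should bind with roll-over itself and only then report the port, as discussed in the comment above.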

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira
