[ http://issues.apache.org/jira/browse/HADOOP-375?page=comments#action_12423268 ]
            
Devaraj Das commented on HADOOP-375:
------------------------------------

The 'started' field is there just to make sure that we call getPort() only 
after the webserver has started up. Since the webserver starts in a new 
thread, we should wait before calling getPort() (to avoid timing issues). 
Note that 'started' is set to false in the field initialization in 
StatusHttpServer, and is set to true only when the webserver has successfully 
started up (towards the end of the 'start' method in StatusHttpServer). If an 
exception is thrown while starting the webserver, 'started' is not touched. 
Only after the exception is handled (e.g. a new port is found) will 'started' 
be set to true (after the code 'breaks' out of the outermost 'while' loop 
inside the 'start' method).
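A minimal sketch of that pattern (class, field, and method names here are illustrative, not the actual StatusHttpServer code):

```java
// Sketch: 'started' stays false until the port-retry loop succeeds,
// so getPort() is only meaningful once isStarted() returns true.
public class SketchServer {
    private volatile boolean started = false;  // field init: false
    private int port;

    public void start(int basePort) {
        int candidate = basePort;
        while (true) {                 // outermost retry loop
            try {
                bind(candidate);       // may throw if the port is in use
                break;                 // success: leave the loop
            } catch (java.io.IOException e) {
                candidate++;           // exception handled: try next port
            }
        }
        port = candidate;
        started = true;                // set only after a successful bind
    }

    // Stand-in for the real bind; fails for ports below 8000 to
    // simulate "Address already in use" on the first attempts.
    private void bind(int p) throws java.io.IOException {
        if (p < 8000) throw new java.io.IOException("Address already in use");
    }

    public boolean isStarted() { return started; }
    public int getPort() { return port; }

    public static void main(String[] args) {
        SketchServer s = new SketchServer();
        s.start(7998);
        System.out.println(s.isStarted() + " " + s.getPort());
    }
}
```

The key point is the ordering: 'started' flips to true only after every bind exception has been handled and the loop has been exited.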

The wait(1) can be replaced with Thread.sleep(1); that is what I meant, and I 
have no issue with replacing it. The loop seems tight, but I expect the 
webserver to start quickly enough that in practice it isn't that tight (in 
any case, it could be replaced by Thread.sleep(100) or something).
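The caller-side wait being discussed could look roughly like this (names are illustrative; the real code polls the StatusHttpServer's flag):

```java
// Sketch: poll a startup flag set by a server thread, sleeping
// between checks so the loop doesn't spin hot. Thread.sleep needs
// no monitor, unlike Object.wait.
public class WaitSketch {
    static volatile boolean started = false;

    public static void main(String[] args) throws InterruptedException {
        Thread server = new Thread(() -> {
            try {
                Thread.sleep(50);      // simulate startup work
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            started = true;            // startup completed
        });
        server.start();
        while (!started) {
            Thread.sleep(100);         // the suggested replacement for wait(1)
        }
        System.out.println("server started, safe to call getPort()");
    }
}
```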

Makes sense?

> Introduce a way for datanodes to register their HTTP info ports with the 
> NameNode
> ---------------------------------------------------------------------------------
>
>                 Key: HADOOP-375
>                 URL: http://issues.apache.org/jira/browse/HADOOP-375
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.5.0
>            Reporter: Devaraj Das
>         Assigned To: Devaraj Das
>         Attachments: content_browsing.new.patch, content_browsing.new.patch
>
>
> If we have multiple datanodes within a single machine the Jetty servers 
> (other than the first one) won't be able to bind to the fixed HTTP port. So, 
> one solution is to have the datanodes pick a free port (starting from a 
> configured port value) and then inform namenode about it so that the namenode 
> can then do redirects, etc.
> Johan Oskarson reported this problem. 
> If a computer has a second dfs data dir in the config it doesn't start 
> properly because of:
> Exception in thread "main" java.io.IOException: Problem starting http server
>         at 
> org.apache.hadoop.mapred.StatusHttpServer.start(StatusHttpServer.java:182)
>         at org.apache.hadoop.dfs.DataNode.<init>(DataNode.java:170)
>         at 
> org.apache.hadoop.dfs.DataNode.makeInstanceForDir(DataNode.java:1045)
>         at org.apache.hadoop.dfs.DataNode.run(DataNode.java:999)
>         at org.apache.hadoop.dfs.DataNode.runAndWait(DataNode.java:1015)
>         at org.apache.hadoop.dfs.DataNode.main(DataNode.java:1066)
> Caused by: org.mortbay.util.MultiException[java.net.BindException: Address 
> already in use]
>         at org.mortbay.http.HttpServer.doStart(HttpServer.java:731)
>         at org.mortbay.util.Container.start(Container.java:72)
>         at 
> org.apache.hadoop.mapred.StatusHttpServer.start(StatusHttpServer.java:159)
>         ... 5 more

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
