[ http://issues.apache.org/jira/browse/HADOOP-375?page=comments#action_12422677 ]

Devaraj Das commented on HADOOP-375:
------------------------------------

The way I am thinking of doing this is to add a new field, infoPort, to the 
DatanodeRegistration class. infoPort is set to the port that Jetty actually 
bound to (after retries, if applicable). It becomes known to the namenode as 
part of the registration process, and the namenode stores the port in a (new) 
field of the DatanodeInfo object it creates for every datanode. Later on, 
anyone wishing to contact Jetty on a specific datanode (say, to redirect the 
user to a datanode containing a particular data block) can do so by reading 
the port number from that DatanodeInfo object (e.g., LocatedBlock contains an 
array of DatanodeInfo objects identifying where a given block can be found). 
Does this make sense?

> Introduce a way for datanodes to register their HTTP info ports with the NameNode
> ---------------------------------------------------------------------------------
>
>                 Key: HADOOP-375
>                 URL: http://issues.apache.org/jira/browse/HADOOP-375
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.5.0
>            Reporter: Devaraj Das
>         Assigned To: Devaraj Das
>
> If we have multiple datanodes within a single machine, the Jetty servers 
> (other than the first one) won't be able to bind to the fixed HTTP port. So 
> one solution is to have the datanodes pick a free port (starting from a 
> configured port value) and then inform the namenode about it, so that the 
> namenode can do redirects, etc.
> Johan Oskarson reported this problem. 
> If a computer has a second dfs data dir in the config, it doesn't start 
> properly because of:
> Exception in thread "main" java.io.IOException: Problem starting http server
>         at org.apache.hadoop.mapred.StatusHttpServer.start(StatusHttpServer.java:182)
>         at org.apache.hadoop.dfs.DataNode.<init>(DataNode.java:170)
>         at org.apache.hadoop.dfs.DataNode.makeInstanceForDir(DataNode.java:1045)
>         at org.apache.hadoop.dfs.DataNode.run(DataNode.java:999)
>         at org.apache.hadoop.dfs.DataNode.runAndWait(DataNode.java:1015)
>         at org.apache.hadoop.dfs.DataNode.main(DataNode.java:1066)
> Caused by: org.mortbay.util.MultiException[java.net.BindException: Address already in use]
>         at org.mortbay.http.HttpServer.doStart(HttpServer.java:731)
>         at org.mortbay.util.Container.start(Container.java:72)
>         at org.apache.hadoop.mapred.StatusHttpServer.start(StatusHttpServer.java:159)
>         ... 5 more
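
The free-port retry the issue description suggests could look roughly like 
this (an assumed sketch, not the actual StatusHttpServer code):

```java
import java.io.IOException;
import java.net.ServerSocket;

// Sketch: starting from the configured base port, probe upward until a bind
// succeeds; the port that worked is what the datanode would hand to Jetty
// and then report to the namenode as its infoPort.
class InfoPortFinder {
    static int findFreePort(int base, int maxRetries) throws IOException {
        for (int port = base; port <= base + maxRetries; port++) {
            try (ServerSocket probe = new ServerSocket(port)) {
                return port;  // bound successfully; release and use this port
            } catch (IOException inUse) {
                // port already taken (the BindException case above); try next
            }
        }
        throw new IOException("no free port in [" + base + ", "
                + (base + maxRetries) + "]");
    }
}
```

Note there is an inherent race between probing and Jetty's own bind, so in 
practice the retry loop would live around Jetty's start() itself.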
