HttpFS may be an easier solution for you, as all traffic goes through the
same machine. WebHDFS and HttpFS use the same REST API.
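
For illustration, here is a rough sketch of that create flow aimed at an
HttpFS gateway instead, using Python's requests library. The gateway host
name, port 14000, file path and user name below are just placeholders.
Because HttpFS proxies the data itself, any redirect it returns points back
at the gateway, so only that one hostname has to be resolvable:

import requests

# Placeholder gateway address; adjust host, port, path and user for your setup.
HTTPFS = "http://httpfs-host:14000/webhdfs/v1"

# First PUT: same CREATE call as against WebHDFS; don't follow the
# redirect automatically so we can look at where it points.
r1 = requests.put(
    HTTPFS + "/tmp/test.txt",
    params={"op": "CREATE", "user.name": "hdfs", "overwrite": "true"},
    allow_redirects=False,
)
# With HttpFS the Location header still names the gateway, not a datanode.
upload_url = r1.headers["Location"]

# Second PUT: send the actual file contents to that URL.
r2 = requests.put(upload_url, data=b"hello from httpfs\n")
r2.raise_for_status()

Same calls as before, just one machine to point DNS (or /etc/hosts) at.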

Thx.


On Mon, Mar 11, 2013 at 10:58 AM, Steven Matthews <
steven.matth...@rapidinsightinc.com> wrote:

> I'm writing some test code to try out the WebHDFS REST API for potentially
> adding functionality to a program. If I want to create and write to a file
> using the API, I send the initial create request and get back the location
> of a datanode, as the documentation says I should. The location I'm getting
> back, however, is a hostname and not an IP. If I have that name registered
> in my DNS server, or put an entry in my hosts file, then I can successfully
> make the second create call using that name and the file gets created. Is
> there any reason why it doesn't respond with an IP address? Is this
> something it could be configured to do? In a fully distributed Hadoop setup
> it would seem like a big pain to register and maintain all the datanode
> names and their IPs for machines across the internet just so I can resolve
> a name.
>
> -Steven
>
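
For reference, the two-step create described in the quoted question would
look roughly like this with Python's requests library; the namenode address,
file path and user name are placeholders, and the Location header in step 1
is where the datanode hostname comes back:

import requests

# Placeholder NameNode web address; adjust host, port, path and user.
NAMENODE = "http://namenode-host:50070/webhdfs/v1"

# Step 1: ask the NameNode where to write. It answers 307 with a Location
# header naming a datanode (a hostname, not an IP), so don't follow the
# redirect automatically.
r1 = requests.put(
    NAMENODE + "/tmp/test.txt",
    params={"op": "CREATE", "user.name": "hdfs", "overwrite": "true"},
    allow_redirects=False,
)
datanode_url = r1.headers["Location"]  # this hostname must resolve on the client

# Step 2: send the file contents to the datanode URL from step 1.
r2 = requests.put(datanode_url, data=b"hello webhdfs\n")
r2.raise_for_status()  # expect 201 Created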



-- 
Alejandro
