Yes, any machine that has network access to the cluster can read and write HDFS.
It does not need to be part of the cluster or to run any Hadoop daemons.

Such a client just needs to have Hadoop set up on it, plus the configuration
details for contacting the namenode.
If you use the hadoop command line, that means the Hadoop XML config files
have to be set up.  If you embed the Hadoop jars in your own app, you have to
provide the config information via files or programmatically.
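For the command-line case, a minimal core-site.xml on the client might look like the sketch below. The hostname and port are placeholders, not values from this thread; substitute your own namenode's address.

```xml
<?xml version="1.0"?>
<configuration>
  <!-- Points the client at the namenode; everything else is discovered from there. -->
  <property>
    <name>fs.default.name</name>
    <value>hdfs://namenode.example.com:8020</value>
  </property>
</configuration>
```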

Essentially, the client only needs to know how to contact the namenode.  The
namenode will automatically tell the HDFS client how to communicate with each
datanode for storing or retrieving data.
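Embedding the jars, the same idea can be sketched with Hadoop's FileSystem API. This is a minimal illustration under assumptions, not a tested program: the namenode URI is a hypothetical placeholder, and the code must be compiled against the Hadoop jars and run with network access to a live cluster.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RemoteHdfsClient {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Programmatic equivalent of the core-site.xml entry;
        // hdfs://namenode.example.com:8020 is a placeholder address.
        conf.set("fs.default.name", "hdfs://namenode.example.com:8020");

        FileSystem fs = FileSystem.get(conf);

        // Write: the client asks the namenode where to place blocks,
        // then streams the bytes directly to the chosen datanodes.
        Path file = new Path("/tmp/hello.txt");
        FSDataOutputStream out = fs.create(file);
        out.writeUTF("hello from a machine outside the cluster");
        out.close();

        // Read: the namenode returns the block locations; the client
        // pulls the data straight from the datanodes.
        FSDataInputStream in = fs.open(file);
        System.out.println(in.readUTF());
        in.close();
    }
}
```

Note that the namenode only serves metadata; the actual file bytes flow between the client and the datanodes, which is why the client machine needs network access to all of them, not just the namenode.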


On 6/14/09 8:54 PM, "Sugandha Naolekar" <sugandha....@gmail.com> wrote:

Hello!

I want to execute all my code on a machine that is remote (not part of the
Hadoop cluster).
This code includes file transfers between any nodes (remote, within the
Hadoop cluster, or within the same LAN), as well as HDFS access. I will
simply have to write code for this.

Is it possible?

Thanks,
Regards,
Sugandha
