Hi Steven,

I'm not a Hadoop expert, but I think this is a very common case. If your
machine outside the cluster (let's call it X) will fetch files from your
cluster regularly, then the best idea is to configure Hadoop on machine X to
talk to your master nodes. You need to install the Hadoop packages on
machine X and set properties such as fs.default.name in the config files
under /etc/hadoop, as sketched below.
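
A minimal core-site.xml sketch for machine X, just to illustrate the idea
(the namenode host is borrowed from the -get example further down, and port
8020 is only a guess at what your namenode actually listens on):

  <!-- /etc/hadoop/core-site.xml on machine X -->
  <configuration>
    <property>
      <!-- tell the client where your cluster's namenode lives -->
      <name>fs.default.name</name>
      <value>hdfs://nn.example.com:8020</value>
    </property>
  </configuration>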

If everything is configured correctly, you can just run commands like:

> hadoop fs -ls /tmp

and Hadoop on machine X will know from its config that it has to go to the
namenode in your cluster.
You can also specify the cluster explicitly with a full hdfs:// path:

> hadoop fs -get hdfs://nn.example.com/user/hadoop/file localfile

which is handy when you have two clusters or so.
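
Once the config is in place, reading and writing from machine X is just the
usual fs commands; the paths below are made up for illustration:

> hadoop fs -put mylocalfile /user/hadoop/mylocalfile
> hadoop fs -cat /user/hadoop/mylocalfile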
Check the documentation for your version:
http://hadoop.apache.org/common/docs/r0.20.0/hdfs_shell.html

Cheers


On 30 September 2010 19:30, Steven Wong <sw...@netflix.com> wrote:

> hadoop fs -get or -getmerge or -cat or ...
>
>
> -----Original Message-----
> From: Adarsh Sharma [mailto:adarsh.sha...@orkash.com]
> Sent: Thursday, September 30, 2010 5:02 AM
> To: hive-user@hadoop.apache.org
> Subject: Read/write into HDFS
>
> Dear all,
> I have set up a Hadoop cluster of 10 nodes.
> I want to know how we can read/write a file from HDFS (something simple).
> Yes, I know there are commands; I have read through all the HDFS commands.
> bin/hadoop -copyFromLocal says that the file should be in the local filesystem.
>
> But I want to know how we can read these files from the cluster.
> What are the different ways to read files from HDFS?
> Can an extra node (other than the cluster nodes) read a file from the
> cluster?
> If yes, how?
>
> Thanks in Advance
