You can use the hadoop command-line utility to access the file system, and
you can point it at a particular namenode via the -fs generic option. (Note
that FsShell options take a single dash, so the --help below is reported as
an unknown command, but it still prints the usage listing.)
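For example, using the namenode host and port from the quoted question
below:

$ hadoop fs -fs hdfs://cs-sy-230.cse.iitkgp.ernet.in:54310 -ls /user/user/blog-hadoop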

$ hadoop fs --help
-help: Unknown command
Usage: java FsShell
           [-ls <path>]
           [-lsr <path>]
           [-df [<path>]]
           [-du <path>]
           [-dus <path>]
           [-count[-q] <path>]
           [-mv <src> <dst>]
           [-cp <src> <dst>]
           [-rm [-skipTrash] <path>]
           [-rmr [-skipTrash] <path>]
           [-expunge]
           [-put <localsrc> ... <dst>]
           [-copyFromLocal <localsrc> ... <dst>]
           [-moveFromLocal <localsrc> ... <dst>]
           [-get [-ignoreCrc] [-crc] <src> <localdst>]
           [-getmerge <src> <localdst> [addnl]]
           [-cat <src>]
           [-text <src>]
           [-copyToLocal [-ignoreCrc] [-crc] <src> <localdst>]
           [-moveToLocal [-crc] <src> <localdst>]
           [-mkdir <path>]
           [-setrep [-R] [-w] <rep> <path/file>]
           [-touchz <path>]
           [-test -[ezd] <path>]
           [-stat [format] <path>]
           [-tail [-f] <file>]
           [-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
           [-chown [-R] [OWNER][:[GROUP]] PATH...]
           [-chgrp [-R] GROUP PATH...]
           [-help [cmd]]

Generic options supported are
-conf <configuration file>     specify an application configuration file
-D <property=value>            use value for given property
-fs <local|namenode:port>      specify a namenode
-jt <local|jobtracker:port>    specify a job tracker
-files <comma separated list of files>    specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars>    specify comma separated jar files to include in the classpath.
-archives <comma separated list of archives>    specify comma separated archives to be unarchived on the compute machines.

The general command line syntax is
bin/hadoop command [genericOptions] [commandOptions]
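For instance, to copy a local file into HDFS and read it back (the file
name here is just an example):

$ hadoop fs -put blog.txt /user/user/blog-hadoop/
$ hadoop fs -cat /user/user/blog-hadoop/blog.txt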

Here are more details and examples:

http://hadoop.apache.org/common/docs/r0.18.3/hdfs_shell.html

Alternatively, you can look into using FUSE to mount HDFS as a standard
file system:

http://wiki.apache.org/hadoop/MountableHDFS
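Something along these lines (the wrapper script name and mount point vary
by build, so treat this as a sketch based on the wiki page above):

$ fuse_dfs_wrapper.sh dfs://cs-sy-230.cse.iitkgp.ernet.in:54310 /mnt/hdfs
$ ls /mnt/hdfs/user/user/blog-hadoop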

On 10/17/10 10:06 PM, "[email protected]"
<[email protected]> wrote:

> From: siddharth raghuvanshi <[email protected]>
> Date: Sat, 16 Oct 2010 21:46:49 +0530
> To: <[email protected]>
> Subject: Unable to access hdfs file system from command terminal
> 
> Hi,
> 
> Can I access the hadoop filesystem from the terminal like
>  hdfs://cs-sy-230.cse.iitkgp.ernet.in:54310/user/user/blog-hadoop
> 
> It should be noted that I am able to open the following link using firefox
> web browser
> http://cs-sy-230.cse.iitkgp.ernet.in:50075/browseDirectory.jsp?dir=%2Fuser%2Fuser%2Fblog-hadoop&namenodeInfoPort=50070
> 
> Regards
> Siddharth

