try this:
fuse_dfs#dfs://NAMENODE:PORT /mnt fuse usetrash,rw 0 0
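
The `fuse: unknown option `-oallow_other'` failure further down is the giveaway: on the fuse_dfs command line options are written as -oNAME, but in the options field of /etc/fstab they are plain comma-separated words with no -o prefix. A quick sketch of the difference (option names taken from the thread; the sed call is only for illustration):

```shell
# Command-line style options, as mistakenly copied into /etc/fstab:
cli_opts="-oallow_other,rw,-ousetrash"
# Strip the -o prefixes to get the fstab-style options field:
fstab_opts=$(echo "$cli_opts" | sed 's/^-o//; s/,-o/,/g')
echo "$fstab_opts"   # prints: allow_other,rw,usetrash
```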

--
Alexander Lorenz
http://mapredit.blogspot.com
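
One more thing worth checking for the "Transport endpoint is not connected" error below: that message usually means an earlier fuse mount died and left a stale mount point behind, and it has to be unmounted before a new mount will work. Roughly (assuming /mnt as in the thread, run as root):

```shell
# Detach the stale fuse mount before retrying:
fusermount -u /mnt || umount -l /mnt
# Then retry via the fstab entry:
mount /mnt
```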

On Jan 12, 2012, at 5:05 AM, Stuti Awasthi wrote:

> Hi,
> I modified the /etc/fstab to following :
> fuse_dfs#dfs://slave:54310 /mnt fuse allow_other,rw,usetrash 0 0
> 
> Now I am just getting warnings when I try to mount.
> 
> [root@slave fuse-dfs]# mount /mnt
> port=54310,server=slave
> fuse-dfs didn't recognize /mnt,-2
> fuse-dfs ignoring option allow_other
> fuse-dfs ignoring option dev
> fuse-dfs ignoring option suid
> 
> But then I am getting "Transport endpoint is not connected".
> Output of df -h is:
> [root@slave fuse-dfs]# df -h
> Filesystem            Size  Used Avail Use% Mounted on
> /dev/mapper/vg_slave-lv_root                       50G  4.4G   43G  10% /
> tmpfs                 999M  272K  999M   1% /dev/shm
> /dev/sda1             485M   30M  430M   7% /boot
> /dev/mapper/vg_slave-lv_home                       94G  188M   89G   1% /home
> df: `/mnt': Transport endpoint is not connected
> 
> Thanks
> -----Original Message-----
> From: Stuti Awasthi
> Sent: Thursday, January 12, 2012 5:32 PM
> To: hdfs-user@hadoop.apache.org
> Subject: Unable to mount HDFS using entry in /etc/fstab
> 
> Hi All,
> I am able to mount HDFS using fuse-dfs; I am using
> http://wiki.apache.org/hadoop/MountableHDFS as the reference.
> Currently I can mount HDFS using fuse_dfs_wrapper.sh and also by directly
> executing the fuse_dfs executable.
> 
> Eg:
> [root@slave fuse-dfs]# fuse_dfs -oserver=slave -oport=54310 -oallow_other -ousetrash rw /mnt -d
> fuse-dfs ignoring option allow_other
> fuse-dfs didn't recognize /mnt,-2
> fuse-dfs ignoring option -d
> FUSE library version: 2.8.3
> nullpath_ok: 0
> unique: 1, opcode: INIT (26), nodeid: 0, insize: 56
> INIT: 7.13
> flags=0x0000007b
> max_readahead=0x00020000
>   INIT: 7.12
>   flags=0x00000011
>   max_readahead=0x00020000
>   max_write=0x00020000
>   unique: 1, success, outsize: 40
> unique: 2, opcode: STATFS (17), nodeid: 1, insize: 40
> statfs /
>   unique: 2, success, outsize: 96
> unique: 3, opcode: GETATTR (3), nodeid: 1, insize: 56
> getattr /
>   unique: 3, success, outsize: 120
> 
> Now when I add the entry in the /etc/fstab file and try to mount HDFS, I get
> the following error:
> 
> Entry in /etc/fstab:
> fuse_dfs#dfs://slave:54310 /mnt fuse -oallow_other,rw,-ousetrash 0 0
> 
> [root@slave fuse-dfs]# mount /mnt
> port=54310,server=slave
> fuse-dfs didn't recognize /mnt,-2
> fuse-dfs ignoring option -oallow_other
> fuse-dfs ignoring option -ousetrash
> fuse-dfs ignoring option dev
> fuse-dfs ignoring option suid
> fuse: unknown option `-oallow_other'
> 
> Exported env variable:
> 
> declare -x 
> CLASSPATH="/root/MountHDFS/hadoop-0.20.2/lib/commons-cli-1.2.jar:/root/MountHDFS/hadoop-0.20.2/lib/commons-codec-1.3.jar:/root/MountHDFS/hadoop-0.20.2/lib/commons-el-1.0.jar:/root/MountHDFS/hadoop-0.20.2/lib/commons-httpclient-3.0.1.jar:/root/MountHDFS/hadoop-0.20.2/lib/commons-logging-1.0.4.jar:/root/MountHDFS/hadoop-0.20.2/lib/commons-logging-api-1.0.4.jar:/root/MountHDFS/hadoop-0.20.2/lib/commons-net-1.4.1.jar:/root/MountHDFS/hadoop-0.20.2/lib/core-3.1.1.jar:/root/MountHDFS/hadoop-0.20.2/lib/hsqldb-1.8.0.10.jar:/root/MountHDFS/hadoop-0.20.2/lib/jasper-compiler-5.5.12.jar:/root/MountHDFS/hadoop-0.20.2/lib/jasper-runtime-5.5.12.jar:/root/MountHDFS/hadoop-0.20.2/lib/jets3t-0.6.1.jar:/root/MountHDFS/hadoop-0.20.2/lib/jetty-6.1.14.jar:/root/MountHDFS/hadoop-0.20.2/lib/jetty-util-6.1.14.jar:/root/MountHDFS/hadoop-0.20.2/lib/junit-3.8.1.jar:/root/MountHDFS/hadoop-0.20.2/lib/kfs-0.2.2.jar:/root/MountHDFS/hadoop-0.20.2/lib/log4j-1.2.15.jar:/root/MountHDFS/hadoop-0.20.2/lib/mockito-all-1.8.0.jar:/root/MountHDFS/hadoop-0.20.2/lib/oro-2.0.8.jar:/root/MountHDFS/hadoop-0.20.2/lib/servlet-api-2.5-6.1.14.jar:/root/MountHDFS/hadoop-0.20.2/lib/slf4j-api-1.4.3.jar:/root/MountHDFS/hadoop-0.20.2/lib/slf4j-log4j12-1.4.3.jar:/root/MountHDFS/hadoop-0.20.2/lib/xmlenc-0.52.jar:/root/MountHDFS/hadoop-0.20.2/hadoop-0.20.2-ant.jar:/root/MountHDFS/hadoop-0.20.2/hadoop-0.20.2-core.jar:/root/MountHDFS/hadoop-0.20.2/hadoop-0.20.2-examples.jar:/root/MountHDFS/hadoop-0.20.2/hadoop-0.20.2-test.jar:/root/MountHDFS/hadoop-0.20.2/hadoop-0.20.2-tools.jar"
> declare -x FUSE_HOME="/usr/include/fuse"
> declare -x HADOOP_HOME="/root/MountHDFS/hadoop-0.20.2"
> declare -x JAVA_HOME="/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64"
> declare -x 
> LD_LIBRARY_PATH="/usr/lib:/usr/lib64:/usr/local/lib:/usr/local/lib64:/root/MountHDFS/hadoop-0.20.2/build/libhdfs:/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre/lib/amd64/server/:/lib64/libfuse.so.2:/lib64/libfuse.so"
> declare -x 
> PATH="/usr/include/fuse:/usr/include/fuse.h:/lib64/libfuse.so.2:/lib64:/usr/lib64:/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/bin/:/root/MountHDFS/Ant/apache-ant-1.8.2/bin"
> 
> Has anyone else faced the same issue? Please suggest how I can fix this.
> 
> Regards,
> Stuti Awasthi
> HCL Comnet Systems and Services Ltd
> F-8/9 Basement, Sec-3,Noida.
> 
> 
