Hi Eli,

Thank you for your reply.
The version of Hadoop that I am using is hadoop-0.20.0. I executed the following command in term A:

./fuse_dfs_wrapper.sh dfs://drbd-test-vm03:8020 /mnt/hdfs -d

■ term A
[r...@host03 fuse-dfs]# ./fuse_dfs_wrapper.sh dfs://drbd-test-vm03:8020 /mnt/hdfs -d
port=8020,server=drbd-test-vm03
fuse-dfs didn't recognize /mnt/hdfs,-2
fuse-dfs ignoring option -d
unique: 1, opcode: INIT (26), nodeid: 0, insize: 56
INIT: 7.8
flags=0x00000003
max_readahead=0x00020000
   INIT: 7.8
   flags=0x00000001
   max_readahead=0x00020000
   max_write=0x00020000
unique: 1, error: 0 (Success), outsize: 40
unique: 2, opcode: GETATTR (3), nodeid: 1, insize: 40
unique: 2, error: 0 (Success), outsize: 112
unique: 3, opcode: GETATTR (3), nodeid: 1, insize: 40
unique: 3, error: 0 (Success), outsize: 112
unique: 4, opcode: GETATTR (3), nodeid: 1, insize: 40
unique: 4, error: 0 (Success), outsize: 112
unique: 5, opcode: GETATTR (3), nodeid: 1, insize: 40
unique: 5, error: 0 (Success), outsize: 112
unique: 6, opcode: OPENDIR (27), nodeid: 1, insize: 48
unique: 6, error: 0 (Success), outsize: 32
unique: 7, opcode: GETATTR (3), nodeid: 1, insize: 40
unique: 7, error: 0 (Success), outsize: 112
unique: 8, opcode: READDIR (28), nodeid: 1, insize: 64
unique: 8, error: 0 (Success), outsize: 120
unique: 9, opcode: RELEASEDIR (29), nodeid: 1, insize: 64
unique: 9, error: 0 (Success), outsize: 16
unique: 10, opcode: GETATTR (3), nodeid: 1, insize: 40
unique: 10, error: 0 (Success), outsize: 112
unique: 11, opcode: GETATTR (3), nodeid: 1, insize: 40
unique: 11, error: 0 (Success), outsize: 112
unique: 12, opcode: OPENDIR (27), nodeid: 1, insize: 48
unique: 12, error: 0 (Success), outsize: 32
unique: 13, opcode: GETATTR (3), nodeid: 1, insize: 40
unique: 13, error: 0 (Success), outsize: 112
unique: 14, opcode: READDIR (28), nodeid: 1, insize: 64
unique: 14, error: 0 (Success), outsize: 104
unique: 15, opcode: RELEASEDIR (29), nodeid: 1, insize: 64
unique: 15, error: 0 (Success), outsize: 16

■ term B
[r...@host03 fuse-dfs]# ls /mnt/hdfs/
ls: reading directory /mnt/hdfs/: Input/output error

When I executed the ls command in term B, the following output was displayed in term A:

unique: 10, opcode: GETATTR (3), nodeid: 1, insize: 40
unique: 10, error: 0 (Success), outsize: 112
unique: 11, opcode: GETATTR (3), nodeid: 1, insize: 40
unique: 11, error: 0 (Success), outsize: 112
unique: 12, opcode: OPENDIR (27), nodeid: 1, insize: 48
unique: 12, error: 0 (Success), outsize: 32
unique: 13, opcode: GETATTR (3), nodeid: 1, insize: 40
unique: 13, error: 0 (Success), outsize: 112
unique: 14, opcode: READDIR (28), nodeid: 1, insize: 64
unique: 14, error: 0 (Success), outsize: 104
unique: 15, opcode: RELEASEDIR (29), nodeid: 1, insize: 64
unique: 15, error: 0 (Success), outsize: 16

I could not determine the cause, because no error is reported even when fuse-dfs is started in debug mode. Is there anything else I should check?

Best regards,
Tadashi

> -----Original Message-----
> From: Eli Collins [mailto:e...@cloudera.com]
> Sent: Tuesday, January 05, 2010 5:58 PM
> To: hdfs-user@hadoop.apache.org
> Subject: Re: fuse-dfs
>
> Hey Tadashi,
>
> What version of hadoop are you using? What is the debug output if you
> just execute the following in one term and the ls in the other?
>
> ./fuse_dfs_wrapper.sh dfs://drbd-test-vm03:8020 /mnt/hdfs -d
>
> Thanks,
> Eli
>
> 2010/1/4 <tate...@nttdata.co.jp>:
> > Hi,
> >
> > I get the following error when trying to mount the fuse dfs.
> >
> > [fuse-dfs]$ ./fuse_dfs_wrapper.sh -d dfs://drbd-test-vm03:8020 /mnt/hdfs/
> > port=8020,server=drbd-test-vm03
> > fuse-dfs didn't recognize /mnt/hdfs,-2
> > [fuse-dfs]$ df
> > Filesystem           1K-blocks      Used Available Use% Mounted on
> > /dev/mapper/VolGroup00-LogVol00
> >                        9047928   5934252   2650076  70% /
> > /dev/xvda1              101086     13230     82637  14% /boot
> > tmpfs                  1048576         0   1048576   0% /dev/shm
> > fuse                   9043968         0   9043968   0% /mnt/hdfs
> > [fuse-dfs]$ ls -ltr /mnt/hdfs/
> > total 0
> > ?--------- ? ? ? ? ? t.class
> > [fuse-dfs]$ ls -ltr /mnt/hdfs/
> > ls: reading directory /mnt/hdfs/: Input/output error
> > total 0
> >
> > We use Red Hat Enterprise Linux 5 Update 2,
> > kernel-xen-2.6.18-92.1.17.0.2.el5, kernel-headers-2.6.18-92.1.17.0.2.el5,
> > kernel-xen-devel-2.6.18-92.1.17.0.2.el5,
> > hadoop-0.20.0, and fuse-2.7.4.
> >
> > I am not sure what the reason for this error is.
> > What should I do to avoid it?
> > Does anyone know what I am doing wrong or what could be causing these
> > errors?
> >
> > Best regards,
> > Tadashi
> >