Hi Elena,

FUSE-DFS is extremely picky about hostnames. All of the following should contain the exact same hostname string:
- Output of "hostname" on the namenode.
- fs.default.name
- Primary reverse-DNS of the namenode's IP.

"localhost" is almost certainly not what you want. (A couple of quick commands for comparing the three are sketched at the bottom of this message, below your quoted mail.)

Brian

On Jun 7, 2011, at 9:47 AM, elena.otero wrote:

> Hello everyone:
>
> I have installed Hadoop 0.20 (single node) on Ubuntu 11.04 64-bit.
> I have been successful in compiling fuse-dfs according to the MountableHDFS wiki.
> The FUSE version I'm using is the one bundled with Natty.
> When I run ./fuse_dfs_wrapper.sh dfs://localhost:9000 ../mnt -d (from
> HADOOP_HOME/src/contrib/fuse-dfs/src/), this is what comes up:
>
> port=9000,server=localhost
> fuse-dfs didn't recognize /..../mnt/,-2    // Apparently it's just a warning. It shouldn't matter
> fuse-dfs ignoring option -d                // Apparently it's just a warning. It shouldn't matter
>
> Then a bunch of lines like this:
>
> unique: 2, opcode: GETATTR (3), nodeid: 1, insize: 56
> getattr /
> unique: 3, opcode: GETATTR (3), nodeid: 1, insize: 56
> getattr /
>
> which seems correct.
>
> Also some of this:
>
> unique: 11, opcode: LOOKUP (1), nodeid: 1, insize: 52
> LOOKUP /autorun.inf
> getattr /autorun.inf
> unique: 11, error: -2 (No such file or directory), outsize: 16
>
> which it seems to recover from, because it doesn't get stuck or exit. Until this one:
>
> unique: 147, opcode: LOOKUP (1), nodeid: 1, insize: 52
> LOOKUP /autorun.inf
> getattr /autorun.inf
> unique: 147, error: -2 (No such file or directory), outsize: 16
>
> Here it just gets stuck and I have to Ctrl+C. I haven't been able to get any further.
> The half-mounted directory looks like this:
>
> -rw-r--r-- 1 ***** *****  13366 2010-02-19 08:55 ********.txt
> drwxr-xr-x 3 ***** ****    4096 2011-06-07 15:32 logs
> d????????? ? ?     ?          ?                ? mnt
> -rw-r--r-- 1 ***** *****    101 2010-02-19 08:55 ********.txt
> -rw-r--r-- 1 ***** *****   1366 2010-02-19 08:55 *******.txt
>
> To try again, I have to unmount first (umount /.../mnt). If I don't, it says
> that the transport endpoint is not connected, which seems obvious because the
> operation didn't finish successfully.
>
> I have done some research on the web and haven't found anything along the same
> lines except for:
>
> http://www.mail-archive.com/common-user@hadoop.apache.org/msg02351.html
>
> I did what's suggested there:
>
> fuse_dfs -oserver=127.0.0.1 -oport=9000 /dfs -oallow_other -ordbuffer=131072
>
> but I got the same result.
>
> Any ideas?
> Did it happen to somebody else?
>
> Thank you in advance.
>
> Elena.
>
> --
> View this message in context:
> http://old.nabble.com/error--2-%28No-such-file-or-directory%29-when-mounting-fuse-dfs-tp31792099p31792099.html
> Sent from the Hadoop core-user mailing list archive at Nabble.com.
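P.S. To make the comparison concrete, here is a rough, untested sketch of how I'd check the three values. The conf path assumes a typical layout under HADOOP_HOME/conf, "elena-box" is just a placeholder for whatever "hostname" prints on your namenode, and 192.168.1.10 stands in for the namenode's real IP; substitute your own values.

    # On the namenode: what the box calls itself
    hostname

    # What fs.default.name is set to (conf path is an assumption; use your own conf dir)
    grep -A 1 'fs.default.name' $HADOOP_HOME/conf/core-site.xml

    # Primary reverse DNS of the namenode's IP (substitute the real address)
    host 192.168.1.10

Once the three agree, the relevant core-site.xml property would look roughly like this (again, "elena-box" is a placeholder; note that fs.default.name normally uses the hdfs:// scheme even though the wrapper script is invoked with dfs://):

    <property>
      <name>fs.default.name</name>
      <!-- use exactly the string "hostname" prints on the namenode -->
      <value>hdfs://elena-box:9000</value>
    </property>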
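And if the hostname does turn out to be the culprit, something along these lines is what I'd try next. Same caveat: "elena-box" is a stand-in for your real namenode hostname, and ../mnt is the mount point from your own command.

    # Clear out the stuck mount first; fusermount -u is the FUSE-aware equivalent of umount
    fusermount -u ../mnt

    # Remount against the real hostname instead of localhost
    ./fuse_dfs_wrapper.sh dfs://elena-box:9000 ../mnt -d

The "transport endpoint is not connected" message you see on a retry is just the leftover of the previous half-mount, so unmounting before every new attempt is expected.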