Ted-

You'll need to run `yum search fuse` on Red Hat-based systems or `apt-cache
search fuse` on Debian-based systems to find the FUSE packages, e.g.

$ yum search fuse
fuse.x86_64 : File System in Userspace (FUSE) utilities
fuse-clamfs.x86_64 : FUSE-based user-space file system for Linux with
on-access anti-virus file scanning
fuse-cryptofs.x86_64 : FUSE-based user-space encrypted filesystem
fuse-curlftpfs.x86_64 : FUSE filesystem for accessing FTP hosts using
libcurl
fuse-davfs2.x86_64 : FUSE-Filesystem to access WebDAV servers
fuse-devel.x86_64 : Header files, libraries and development documentation
for fuse.
fuse-encfs.x86_64 : Encrypted pass-thru filesystem in userspace
fuse-hpafs.x86_64 : FUSE based filesystem to access the Hidden Protected
Area (HPA) on disk
fuse-iso.x86_64 : FUSE module to mount ISO filesystem images
fuse-ntfs-3g.x86_64 : Linux NTFS userspace driver
fuse-ntfs-3g-devel.x86_64 : Header files, libraries and development
documentation for fuse-ntfs-3g.
fuse-obexfs.x86_64 : FUSE based filesystem using ObexFTP
fuse-smb.x86_64 : FUSE-Filesystem to fast and easy access remote resources
via SMB
fuse-sshfs.x86_64 : FUSE-Filesystem to access remote filesystems via SSH
fuse-unionfs.x86_64 : FUSE-base user-space union filesystem

For Hadoop, you should be able to compile with just "fuse" and "fuse-devel"
installed; fuse-devel is the package that provides the fuse.h and
fuse/fuse_opt.h headers your build is erroring on.  You can ignore the other
packages.
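
To get them installed, something like this should work (package names vary a
bit by distro; on Debian-based systems the headers ship in libfuse-dev rather
than fuse-devel):

$ sudo yum install fuse fuse-devel     # Red Hat-based
$ sudo apt-get install libfuse-dev     # Debian-based (plus the fuse runtime package)

After installing, re-running `ant -Dlibhdfs=1 -Dfusedfs=1` should get past the
missing-header errors.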

You could also just download FUSE directly from
http://fuse.sourceforge.net/ if you wanted.
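
If you go the source route, it's the usual autotools routine; roughly (the
tarball name/version below is just a placeholder):

$ tar xzf fuse-2.x.y.tar.gz
$ cd fuse-2.x.y
$ ./configure
$ make
$ sudo make install

Note that this installs the headers under /usr/local/include by default, so you
may need to point the fuse-dfs build at that prefix if it only searches
/usr/include.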

Good luck!

-Matt

On Mon, Sep 7, 2009 at 2:44 PM, Ted Yu <[email protected]> wrote:

> I tried to compile fuse-dfs. libhdfs.so has been compiled.
>
> Under hadoop/src/contrib/fuse-dfs:
> ant -Dlibhdfs=1 -Dfusedfs=1
>
> Then I got:
>     [exec] make[1]: Entering directory
> `/usr/local/hadoop/src/contrib/fuse-dfs/src'
>     [exec] if gcc -DPACKAGE_NAME=\"fuse_dfs\"
> -DPACKAGE_TARNAME=\"fuse_dfs\" -DPACKAGE_VERSION=\"0.1.0\"
> -DPACKAGE_STRING=\"fuse_dfs\ 0.1.0\" -DPACKAGE_BUGREPORT=\"\"
> -DGETGROUPS_T=gid_t -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1
> -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1
> -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1
> -DHAVE_GETGROUPS=1 -DGETGROUPS_T=gid_t  -I. -I.  -DPERMS=1
> -D_FILE_OFFSET_BITS=64 -I/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0/include
> -I/usr/local/hadoop/src/c++/libhdfs/
> -I/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0/include/linux/
> -D_FUSE_DFS_VERSION=\"0.1.0\" -DPROTECTED_PATHS=\"\" -I/include   -Wall -O3
> -MT fuse_dfs.o -MD -MP -MF ".deps/fuse_dfs.Tpo" -c -o fuse_dfs.o
> fuse_dfs.c;
> \
>     [exec]     then mv -f ".deps/fuse_dfs.Tpo" ".deps/fuse_dfs.Po"; else rm
> -f ".deps/fuse_dfs.Tpo"; exit 1; fi
>     [exec] In file included from fuse_dfs.c:19:
>     [exec] fuse_dfs.h:31:18: error: fuse.h: No such file or directory
>     [exec] fuse_dfs.h:32:27: error: fuse/fuse_opt.h: No such file or
> directory
>     [exec] In file included from fuse_dfs.c:20:
>
> Where can I find fuse_opt.h and fuse.h?
>
> Thanks
>
> On Mon, Sep 7, 2009 at 12:08 PM, Brian Bockelman <[email protected]> wrote:
>
> > Hey Ted,
> >
> > It's hard to avoid copying files, unless you are able to change your
> > application to talk to HDFS directly (and even then, there are a lot of
> > "gotchas" that you wouldn't have to put up with at an application level --
> > look at the Chukwa paper).
> >
> > I would advise looking at Chukwa, http://wiki.apache.org/hadoop/Chukwa,
> > and then rotating logfiles quickly.
> >
> > Facebook's Scribe is supposed to do this sort of stuff too (and is very
> > impressive), but I'm not familiar with it.  At face value, it appears that
> > it might take more effort to get Scribe well-integrated, but it would have
> > more functionality.
> >
> > Brian
> >
> >
> > On Sep 7, 2009, at 4:18 AM, Ted Yu wrote:
> >
> >> We're using Hadoop 0.20.0 to analyze large log files from web servers.
> >> I am looking for better HDFS support so that I don't have to copy log
> >> files over from the Linux file system.
> >>
> >> Please comment.
> >>
> >> Thanks
> >>
> >
> >
>
