Hey Jason,
        
        I tried commenting out the piece of code you suggested, rebuilt
everything, unmounted and remounted the filesystem, and here is what happens:

Line 1092 in fuse_dfs.c:

fuse_dfs: ERROR: hdfs trying to rename
/osg/app/robert/deployment-1.3.x/.svn/tmp/entries to
/osg/app/robert/deployment-1.3.x/.svn/entries

which is due to a call to is_protected() at line 1018 (both line
numbers refer to fuse_dfs.c in Hadoop 0.19.1).
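
For reference, the check that triggers this looks roughly like the
sketch below. This is a paraphrase, not the actual 0.19.1 source: the
dfs_context type, its field names, and the explicit context argument
are my assumptions (the real function pulls its context from
fuse_get_context()).

#include <string.h>

/* Hypothetical mount context; the real definition lives in fuse_dfs.c. */
typedef struct {
    char **protectedPaths;    /* paths that must not be renamed/removed */
    int numProtectedPaths;
} dfs_context;

/* Returns nonzero if path is on the mount's protected list. */
static int is_protected(const dfs_context *dfs, const char *path) {
    for (int i = 0; i < dfs->numProtectedPaths; i++) {
        if (strcmp(path, dfs->protectedPaths[i]) == 0)
            return 1;
    }
    return 0;
}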

Thanks for your help!
Robert

jason hadoop wrote:
> In Hadoop 0.19.1 (and 0.19.0), libhdfs (which the fuse package uses for
> HDFS access) explicitly denies open requests that pass O_RDWR.
> 
> If you have binary applications that pass the flag but would work
> correctly given the limitations of HDFS, you may alter the code in
> src/c++/libhdfs/hdfs.c to allow it, or build a shared library that you
> preload to change the flags passed to the real open() (see the sketch
> after the snippet below). Hacking hdfs.c is much simpler.
> 
> Line 407 of hdfs.c:
> 
>     jobject jFS = (jobject)fs;
> 
>     if (flags & O_RDWR) {
>       fprintf(stderr, "ERROR: cannot open an hdfs file in O_RDWR mode\n");
>       errno = ENOTSUP;
>       return NULL;
>     }
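> 
> For the preload route, here is a minimal sketch of an interposer. It is
> an illustration rather than tested code: it assumes glibc/Linux, that
> downgrading O_RDWR to O_WRONLY is acceptable for your binaries, and it
> only handles the extra mode argument in the O_CREAT case.
> 
> /*
>  * open_rw_shim.c -- hypothetical LD_PRELOAD shim.
>  * Build:  gcc -shared -fPIC -o open_rw_shim.so open_rw_shim.c -ldl
>  * Use:    LD_PRELOAD=./open_rw_shim.so svn checkout ...
>  */
> #define _GNU_SOURCE
> #include <dlfcn.h>
> #include <fcntl.h>
> #include <stdarg.h>
> #include <sys/types.h>
> 
> int open(const char *path, int flags, ...)
> {
>     /* look up the real open() once */
>     static int (*real_open)(const char *, int, ...);
>     if (!real_open)
>         real_open = (int (*)(const char *, int, ...))
>                         dlsym(RTLD_NEXT, "open");
> 
>     /* downgrade O_RDWR to O_WRONLY so libhdfs accepts the request */
>     if ((flags & O_ACCMODE) == O_RDWR)
>         flags = (flags & ~O_ACCMODE) | O_WRONLY;
> 
>     if (flags & O_CREAT) {
>         /* O_CREAT carries a mode argument; forward it */
>         va_list ap;
>         va_start(ap, flags);
>         mode_t mode = (mode_t)va_arg(ap, int);
>         va_end(ap);
>         return real_open(path, flags, mode);
>     }
>     return real_open(path, flags);
> }
> 
> A real shim would also need to wrap open64() (glibc routes large-file
> opens through it), which is part of why hacking hdfs.c is simpler.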
> 
> On Fri, May 1, 2009 at 6:34 PM, Philip Zeyliger <phi...@cloudera.com> wrote:
> 
>> HDFS does not allow you to overwrite bytes of a file that have already
>> been written. The only operations it supports are read (of an existing
>> file), write (of a new file), and, in newer versions and not always
>> enabled, append (to an existing file).
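>>
>> To make the supported modes concrete in libhdfs terms: O_WRONLY creates
>> a new file, O_WRONLY|O_APPEND appends (when append is enabled),
>> O_RDONLY reads, and O_RDWR is rejected. A minimal sketch, where the
>> "default" connect string and the path are placeholders:
>>
>> #include <fcntl.h>
>> #include "hdfs.h"
>>
>> int main(void)
>> {
>>     hdfsFS fs = hdfsConnect("default", 0);  /* configured namenode */
>>     if (!fs) return 1;
>>
>>     /* write: only ever to a brand-new file */
>>     hdfsFile out = hdfsOpenFile(fs, "/tmp/demo.txt", O_WRONLY, 0, 0, 0);
>>     if (out) {
>>         hdfsWrite(fs, out, "hello\n", 6);
>>         hdfsCloseFile(fs, out);
>>     }
>>
>>     /* read: an existing file */
>>     hdfsFile in = hdfsOpenFile(fs, "/tmp/demo.txt", O_RDONLY, 0, 0, 0);
>>     if (in) {
>>         char buf[64];
>>         hdfsRead(fs, in, buf, sizeof(buf));
>>         hdfsCloseFile(fs, in);
>>     }
>>
>>     /* O_RDWR here would fail with ENOTSUP, the error you are seeing */
>>     hdfsDisconnect(fs);
>>     return 0;
>> }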
>>
>> -- Philip
>>
>> On Fri, May 1, 2009 at 5:56 PM, Robert Engel <enge...@ligo.caltech.edu>
>> wrote:
>>> Hello,
>>>
>>>    I am using Hadoop on a small storage cluster (x86_64, CentOS 5.3,
>>> Hadoop-0.19.1). The hdfs is mounted using fuse and everything seemed
>>> to work just fine so far. However, I noticed that I cannot:
>>>
>>> 1) use svn to check out files on the mounted hdfs partition
>>> 2) request that stdout and stderr of Globus jobs are written to the
>>> hdfs partition
>>>
>>> In both cases I see the following error message in /var/log/messages:
>>>
>>> fuse_dfs: ERROR: could not connect open file fuse_dfs.c:1364
>>>
>>> When I run fuse_dfs in debugging mode I get:
>>>
>>> ERROR: cannot open an hdfs file in O_RDWR mode
>>> unique: 169, error: -5 (Input/output error), outsize: 16
>>>
>>> My question is whether this is a general limitation of Hadoop or
>>> whether this operation is just not currently supported. I searched
>>> Google and JIRA but could not find an answer.
>>>
>>> Thanks,
>>> Robert
