[ https://issues.apache.org/jira/browse/HDFS-4140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493393#comment-13493393 ]
Andy Isaacson commented on HDFS-4140:
-------------------------------------
{code}
+ // The file exists, and has non-zero size. We don't want a regular open
+ // for write to truncate the whole file to zero-length-- that would be
+ // bad. But if we just give an error here, a lot of programs will not
+ // run-- even those that never intended to do anything that we don't
+ // support. So we open for append instead. If the program tries to
+ // write to offset 0, it will get an error at that point (we don't
+ // support seek + write.)
+ flags |= O_APPEND;
{code}
I seriously doubt that this is an acceptable solution in the long run; we can't
just pretend that the user meant append. But I think we should take this
approach in the short term, since it will allow a few important baseline
applications to work correctly.
Please add a reference to this JIRA in the comment.
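For illustration, the flag translation described in the quoted hunk could be sketched roughly like this (the function name, signature, and surrounding context are hypothetical; the real fuse-dfs code differs):

```c
#include <fcntl.h>

/* Hypothetical sketch of the O_TRUNC workaround discussed above, not the
 * actual fuse-dfs code. If the target file already exists and is non-empty,
 * the truncate request is downgraded to append so that existing programs
 * keep working; a later write at offset 0 will then fail, since
 * seek + write is unsupported. See HDFS-4140. */
static int translate_open_flags(int flags, int file_exists, long file_size)
{
    if ((flags & O_TRUNC) && file_exists && file_size > 0) {
        flags &= ~O_TRUNC;   /* drop the truncate request           */
        flags |= O_APPEND;   /* pretend the caller asked for append */
    }
    return flags;
}
```

The point of contention above is exactly this silent downgrade: the caller asked for truncation but gets append semantics instead, which is why it is only acceptable as a stopgap.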
{code}
+#ifdef FUSE_CAP_ATOMIC_O_TRUNC
...
+ // Unfortunately, this capability is only implemented on Linux 2.6.29 or so.
{code}
I think we should just {{#error hdfs-fuse-dfs requires CAP_ATOMIC_O_TRUNC}}
rather than implementing a mostly-broken codepath that will approximately never
get tested. Again, add a JIRA reference. Requiring RHEL6 (or equivalent) for
FUSE is a reasonable tradeoff.
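The suggested compile-time guard might look like the following sketch. In a real build {{FUSE_CAP_ATOMIC_O_TRUNC}} comes from the FUSE headers; it is defined here as a stand-in only so the fragment is self-contained, and the error message text is illustrative:

```c
/* Stand-in for the value normally provided by the FUSE headers
 * (fuse_common.h); defined locally only to keep this sketch self-contained. */
#define FUSE_CAP_ATOMIC_O_TRUNC (1 << 3)

/* Fail the build outright on FUSE versions that lack atomic O_TRUNC
 * support, instead of compiling a fallback path that is rarely exercised. */
#ifndef FUSE_CAP_ATOMIC_O_TRUNC
#error "hdfs-fuse-dfs requires FUSE_CAP_ATOMIC_O_TRUNC (see HDFS-4140)"
#endif
```

With such a guard, building against a pre-2.6.29-era FUSE that lacks the capability fails at compile time rather than shipping the untested runtime fallback.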
LGTM, +1.
> fuse-dfs handles open(O_TRUNC) poorly
> -------------------------------------
>
> Key: HDFS-4140
> URL: https://issues.apache.org/jira/browse/HDFS-4140
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: fuse-dfs
> Affects Versions: 2.0.2-alpha
> Reporter: Andy Isaacson
> Assignee: Colin Patrick McCabe
> Attachments: HDFS-4140.003.patch
>
>
> fuse-dfs handles open(O_TRUNC) poorly.
> The open is converted into multiple fuse operations, and those operations
> often fail (for example, calling fuse_truncate_impl() while the file is also
> open for write results in a "multiple writers!" exception).
> One easy way to see the problem is to run the following sequence of shell
> commands:
> {noformat}
> ubuntu@ubu-cdh-0:~$ echo foo > /export/hdfs/tmp/a/t1.txt
> ubuntu@ubu-cdh-0:~$ ls -l /export/hdfs/tmp/a
> total 0
> -rw-r--r-- 1 ubuntu hadoop 4 Nov 1 15:21 t1.txt
> ubuntu@ubu-cdh-0:~$ hdfs dfs -ls /tmp/a
> Found 1 items
> -rw-r--r-- 3 ubuntu hadoop 4 2012-11-01 15:21 /tmp/a/t1.txt
> ubuntu@ubu-cdh-0:~$ echo bar > /export/hdfs/tmp/a/t1.txt
> ubuntu@ubu-cdh-0:~$ ls -l /export/hdfs/tmp/a
> total 0
> -rw-r--r-- 1 ubuntu hadoop 0 Nov 1 15:22 t1.txt
> ubuntu@ubu-cdh-0:~$ hdfs dfs -ls /tmp/a
> Found 1 items
> -rw-r--r-- 3 ubuntu hadoop 0 2012-11-01 15:22 /tmp/a/t1.txt
> {noformat}