[ https://issues.apache.org/jira/browse/HADOOP-3485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12602497#action_12602497 ]

Pete Wyckoff commented on HADOOP-3485:
--------------------------------------

Good question - the problem is that in some situations, fuse seems to do 
something more like:

create
release
open
write
flush
release

So, although we already store the file pointer in the fuse_file_info structure, 
there's nothing we can do if fuse tells us to close the file.

And note that the fuse_file_info pointer passed into open doesn't carry over 
the data from the original fuse_file_info. So even not closing the file on 
the first release won't help - unless we cached the file handle ourselves.

If you know how we can get around fuse doing this, it would be very helpful.

-- pete


> implement writes
> ----------------
>
>                 Key: HADOOP-3485
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3485
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: contrib/fuse-dfs
>            Reporter: Pete Wyckoff
>            Priority: Minor
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> Doesn't support writes because the fuse protocol first creates the file, 
> then closes it and re-opens it to start writing. So (until hadoop-1700) we 
> need a workaround.
> One way to do this is to open the file with the overwrite flag on the 
> second open. For safety, we would only want to do this for zero-length 
> files (we could also check the creation timestamp, but clock skew may 
> make that harder).
> Doug, Craig, Nicholas - Comments?
> -- pete
> ps: since this is mostly implemented already, it should be a very quick patch

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
