[ https://issues.apache.org/jira/browse/HADOOP-3627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12608087#action_12608087 ]

Owen O'Malley commented on HADOOP-3627:
---------------------------------------

I think these semantics are actually the best we can do with the current 
protocols. I wouldn't want Windows-like semantics, where a writer anywhere can 
keep you from deleting a file. I think it would make sense to introduce 
file IDs at some point, so that renames while you are writing work 
Unix-style, with the name and contents being completely separate from each 
other. That is a much bigger change to the name node, though...
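
For contrast, a minimal sketch of that Unix-style separation of name and 
contents on a local filesystem (illustrative path; on Windows the delete 
would simply fail while the stream is open):

    import java.io.File;
    import java.io.FileOutputStream;
    import java.io.IOException;

    public class UnlinkWhileOpen {
        public static void main(String[] args) throws IOException {
            File f = new File("/tmp/unlink-demo.txt");  // illustrative path
            FileOutputStream out = new FileOutputStream(f);
            out.write("first".getBytes());

            // On POSIX systems the name and the inode are separate:
            // deleting the name succeeds even though a writer has the
            // file open, and the open descriptor keeps working.
            boolean deleted = f.delete();
            System.out.println("deleted while open: " + deleted);  // true on Unix

            // Writes still go to the now-anonymous inode; its space is
            // reclaimed once the last open descriptor is closed.
            out.write(" second".getBytes());
            out.close();
        }
    }

File IDs would give the name node the same decoupling: a delete or rename 
would detach the name without invalidating the blocks an open writer holds.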

I'd propose that we make this won't-fix.

> HDFS allows deletion of a file while it is still open
> -----------------------------------------------------
>
>                 Key: HADOOP-3627
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3627
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.19.0
>            Reporter: Lohit Vijayarenu
>
> This was a single-node cluster, so my DFSClient ran on the same machine. In one 
> terminal I was writing to an HDFS file, while in another terminal I deleted the 
> same file. The deletion succeeded, and the writing client failed. If the write 
> was still in progress, the next block commit would result in an exception saying 
> the block does not belong to any file. If the write was about to close, then 
> we got an exception while completing the file, because getBlocks failed. 
> Should we allow deletion of an open file? Even if we do, should the write fail?
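>
> A single-process sketch of the repro (compressing the two terminals into one 
> client; the path and class name are illustrative, written against the 
> 0.19-era FileSystem API):
>
>     import org.apache.hadoop.conf.Configuration;
>     import org.apache.hadoop.fs.FSDataOutputStream;
>     import org.apache.hadoop.fs.FileSystem;
>     import org.apache.hadoop.fs.Path;
>
>     public class DeleteWhileWriting {
>         public static void main(String[] args) throws Exception {
>             Configuration conf = new Configuration();
>             FileSystem fs = FileSystem.get(conf);
>             Path p = new Path("/tmp/open-file");  // illustrative path
>
>             // Writer: create the file and write part of the first block.
>             FSDataOutputStream out = fs.create(p);
>             out.write(new byte[1 << 20]);
>
>             // "Other terminal": delete the same file while it is open.
>             // The delete succeeds.
>             fs.delete(p, false);
>
>             // Keep writing past the 64 MB block boundary so the client
>             // must commit the block and allocate a new one; depending on
>             // timing, either that commit or the final close throws,
>             // complaining that the blocks do not belong to any file.
>             byte[] buf = new byte[1 << 20];
>             for (int i = 0; i < 128; i++) {
>                 out.write(buf);
>             }
>             out.close();
>         }
>     }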

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
