[ 
https://issues.apache.org/jira/browse/HADOOP-3627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lohit Vijayarenu resolved HADOOP-3627.
--------------------------------------

    Resolution: Won't Fix

Thanks Owen, Doug. I should have explained the background of this issue as well. We 
saw this case (running 0.17), where, coupled with the NameNode dropping requests 
when it is under load, it caused the write to hang. On trunk, the NameNode no 
longer drops requests, and we agree that we are not changing the semantics of 
delete-while-writing. I will close this as Won't Fix. 

> HDFS allows deletion of file while it is still open
> ---------------------------------------------------
>
>                 Key: HADOOP-3627
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3627
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.19.0
>            Reporter: Lohit Vijayarenu
>
> This was a single-node cluster, so my DFSClient was on the same machine. In one 
> terminal I was writing to an HDFS file, while in another terminal I deleted the 
> same file. The deletion succeeded, and the write client failed. If the write was 
> still in progress, the next block commit resulted in an exception saying 
> the block does not belong to any file. If the write was about to close, 
> we got an exception completing the file because getBlocks failed. 
> Should we allow deletion of an open file? Even if we do, should the write fail?

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
