[ https://issues.apache.org/jira/browse/HDFS-4872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13674150#comment-13674150 ]

Steve Loughran commented on HDFS-4872:
--------------------------------------

bq. Just mark delete idempotent. A delete retry may delete an object that has 
been recreated or replaced between the retries in this case.

This isn't idempotent, so it doesn't meet the requirement of the JIRA.

Inode-driven delete could work. You do have to get that inode ID first, but you 
can then be confident that the file has not changed between that (presumably 
retrying) operation and the delete call. It's also easy to test: create a file, 
get its inode, overwrite the file, delete by inode, expect failure. 

One wrinkle: what if the file identified by that inode has been moved? You 
don't want to delete a file if it has been renamed to a new path.

That would imply the operation shouldn't be {{delete(inode)}} but instead 
{{delete(path, inode)}}.
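The {{delete(path, inode)}} semantics can be sketched against a toy in-memory namespace (the class and method names below are illustrative, not real HDFS APIs): a retried delete succeeds only while the path still maps to the inode the caller originally observed, so a delete retried after a concurrent overwrite becomes a no-op.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

/** Toy namespace modelling the proposed delete(path, inode) check. */
class MockNamespace {
    private final Map<String, Long> pathToInode = new HashMap<>();
    private final AtomicLong nextInode = new AtomicLong(1);

    /** Create or overwrite a file; an overwrite allocates a fresh inode ID. */
    long create(String path) {
        long id = nextInode.getAndIncrement();
        pathToInode.put(path, id);
        return id;
    }

    /**
     * Idempotent delete: removes the file only if the path still refers to
     * the inode the caller observed. If the file is gone or has been
     * replaced (new inode), the retry is a no-op and returns false.
     */
    boolean delete(String path, long expectedInode) {
        Long current = pathToInode.get(path);
        if (current == null || current != expectedInode) {
            return false; // deleted already, or recreated: leave it alone
        }
        pathToInode.remove(path);
        return true;
    }
}

public class IdempotentDeleteDemo {
    public static void main(String[] args) {
        MockNamespace ns = new MockNamespace();
        long inode = ns.create("/tmp/f");   // note the file's inode
        ns.create("/tmp/f");                // concurrent overwrite: new inode
        System.out.println(ns.delete("/tmp/f", inode)); // false
    }
}
```

This is exactly the test scenario above: the overwrite invalidates the observed inode, so the retried delete refuses rather than destroying the replacement file.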
                
> Idempotent delete operation.
> ----------------------------
>
>                 Key: HDFS-4872
>                 URL: https://issues.apache.org/jira/browse/HDFS-4872
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>    Affects Versions: 2.0.4-alpha
>            Reporter: Konstantin Shvachko
>
> Making delete idempotent is important to provide uninterrupted job execution 
> in case of HA failover.
> This is to discuss different approaches to idempotent implementation of 
> delete.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
