[ https://issues.apache.org/jira/browse/HADOOP-17244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17191840#comment-17191840 ]

Steve Loughran commented on HADOOP-17244:
-----------------------------------------

This bug is in {{DeleteOperation.deleteDirectoryTree()}}, BTW, not in any of 
the new code.


Currently we naively list all entries and queue them for delete in pages, and 
there's clearly a big assumption there: that dir markers don't have entries 
under them.
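
Very rough sketch of how I read the current behaviour. These aren't the real 
{{DeleteOperation}} internals, just an illustration of the paging assumption 
(class and method names are made up; the page size is illustrative):

{code:java}
import java.util.ArrayList;
import java.util.List;

class NaivePagedDelete {
  // illustrative page size only, not the real S3A bulk delete page size
  private static final int PAGE_SIZE = 250;

  /** Queue every listed key for deletion, a page at a time. */
  void deleteTree(List<String> listedKeys) {
    List<String> page = new ArrayList<>();
    for (String key : listedKeys) {
      page.add(key);                 // files and dir markers treated identically
      if (page.size() == PAGE_SIZE) {
        submitBulkDelete(page);      // hypothetical: one bulk DELETE, plus the
        page = new ArrayList<>();    // matching incremental S3Guard removal
      }
    }
    if (!page.isEmpty()) {
      submitBulkDelete(page);
    }
  }

  void submitBulkDelete(List<String> keys) {
    // placeholder for the async bulk delete + S3Guard update
  }
}
{code}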

What to change? 

I'm going to go with being less incremental:

- files get deleted from S3Guard incrementally
- dir markers are not; instead, at the end of the delete, the existing cleanup 
will do its thing (see the sketch after this list)
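
A minimal sketch of that split, assuming per-page processing; the names 
({{PathEntry}}, {{bulkDeleteAndRemoveFromS3Guard}}) are hypothetical, and the 
deferred dir-marker handling just stands in for whatever the existing 
end-of-delete cleanup does:

{code:java}
import java.util.ArrayList;
import java.util.List;

class SplitDelete {

  /** Minimal stand-in for a listing entry. */
  static class PathEntry {
    final String key;
    final boolean isDirMarker;

    PathEntry(String key, boolean isDirMarker) {
      this.key = key;
      this.isDirMarker = isDirMarker;
    }
  }

  /** Delete one page of listed entries. */
  void deletePage(List<PathEntry> page) {
    List<String> fileKeys = new ArrayList<>();
    for (PathEntry entry : page) {
      if (entry.isDirMarker) {
        continue;                    // dir markers: defer to the final cleanup pass
      }
      fileKeys.add(entry.key);
    }
    bulkDeleteAndRemoveFromS3Guard(fileKeys);  // files: delete incrementally
  }

  void bulkDeleteAndRemoveFromS3Guard(List<String> keys) {
    // placeholder: one bulk DELETE, then remove the same paths from S3Guard
  }
}
{code}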



> ITestS3AFileContextMainOperations#testRenameDirectoryAsNonExistentDirectory 
> test failure on -Dauth
> --------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-17244
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17244
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.3.1
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Blocker
>
> Test failure: 
> {{ITestS3AFileContextMainOperations#testRenameDirectoryAsNonExistentDirectory}}
> This is repeatable on -Dauth runs (we haven't been running them, have we?)
> Either it's from the recent dir marker changes (initial hypothesis) or it's 
> been lurking a while and not been picked up.


