[ https://issues.apache.org/jira/browse/HADOOP-16430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886460#comment-16886460 ]

Steve Loughran commented on HADOOP-16430:
-----------------------------------------

h3. s3guard + local + single file delete

{code}
Starting: Rename s3a://hwdev-steve-ireland-new/test/testBulkRenameAndDelete/src 
to s3a://hwdev-steve-ireland-new/test/testBulkRenameAndDelete/final
2019-07-16 20:53:49,226 [JUnit-testBulkRenameAndDelete] INFO  
scale.ITestS3ADeleteManyFiles (DurationInfo.java:close(87)) - Rename 
s3a://hwdev-steve-ireland-new/test/testBulkRenameAndDelete/src to 
s3a://hwdev-steve-ireland-new/test/testBulkRenameAndDelete/final: duration 
9:39.621s
2019-07-16 20:53:49,227 [JUnit-testBulkRenameAndDelete] INFO  
scale.ITestS3ADeleteManyFiles 
(ITestS3ADeleteManyFiles.java:testBulkRenameAndDelete(78)) - Effective rename 
bandwidth 0.000146 MB/s
2019-07-16 20:57:31,942 [JUnit-testBulkRenameAndDelete] INFO  
scale.ITestS3ADeleteManyFiles (DurationInfo.java:<init>(72)) - Starting: Delete 
subtree s3a://hwdev-steve-ireland-new/test/testBulkRenameAndDelete/final
2019-07-16 21:03:58,417 [JUnit-testBulkRenameAndDelete] INFO  
scale.ITestS3ADeleteManyFiles (DurationInfo.java:close(87)) - Delete subtree 
s3a://hwdev-steve-ireland-new/test/testBulkRenameAndDelete/final: duration 
6:26.475s
2019-07-16 21:03:58,417 [JUnit-testBulkRenameAndDelete] INFO  
scale.ITestS3ADeleteManyFiles 
(ITestS3ADeleteManyFiles.java:testBulkRenameAndDelete(101)) - Timer per object 
deletion 38.648592 milliseconds
2019-07-16 21:03:59,556 [teardown] INFO  contract.AbstractFSContractTestBase 
(AbstractFSContractTestBase.java:describe(255)) - closing file system
{code}

h3. s3guard + local + bulk delete

{code}
2019-07-16 21:09:12,297 [JUnit-testBulkRenameAndDelete] INFO  
scale.ITestS3ADeleteManyFiles (DurationInfo.java:<init>(72)) - Starting: Rename 
s3a://hwdev-steve-ireland-new/test/testBulkRenameAndDelete/src to 
s3a://hwdev-steve-ireland-new/test/testBulkRenameAndDelete/final
2019-07-16 21:12:54,386 [JUnit-testBulkRenameAndDelete] INFO  
scale.ITestS3ADeleteManyFiles (DurationInfo.java:close(87)) - Rename 
s3a://hwdev-steve-ireland-new/test/testBulkRenameAndDelete/src to 
s3a://hwdev-steve-ireland-new/test/testBulkRenameAndDelete/final: duration 
3:42.089s
2019-07-16 21:12:54,386 [JUnit-testBulkRenameAndDelete] INFO  
scale.ITestS3ADeleteManyFiles 
(ITestS3ADeleteManyFiles.java:testBulkRenameAndDelete(78)) - Effective rename 
bandwidth 0.000382 MB/s
2019-07-16 21:16:43,691 [JUnit-testBulkRenameAndDelete] INFO  
scale.ITestS3ADeleteManyFiles (DurationInfo.java:<init>(72)) - Starting: Delete 
subtree s3a://hwdev-steve-ireland-new/test/testBulkRenameAndDelete/final
2019-07-16 21:17:19,875 [JUnit-testBulkRenameAndDelete] INFO  
scale.ITestS3ADeleteManyFiles (DurationInfo.java:close(87)) - Delete subtree 
s3a://hwdev-steve-ireland-new/test/testBulkRenameAndDelete/final: duration 
0:36.183s
2019-07-16 21:17:19,875 [JUnit-testBulkRenameAndDelete] INFO  
scale.ITestS3ADeleteManyFiles 
(ITestS3ADeleteManyFiles.java:testBulkRenameAndDelete(101)) - Timer per object 
deletion 3.618443 milliseconds
{code}

So again, a 10x speedup in delete time during the tree delete. For rename, the
difference is smaller: even with nearly empty files, the overhead of the
copy overwhelms that of the delete.
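
As a sanity check on the figures above (an illustrative calculation only, not part of the test itself), the durations and per-object times imply roughly 10,000 objects and a ~10.7x per-object speedup:

```java
// Back-of-the-envelope check of the logged timings above.
public class DeleteTimings {
    public static void main(String[] args) {
        // single-file delete path: 6:26.475 total, 38.648592 ms per object
        double singleTotalMs = (6 * 60 + 26.475) * 1000;
        double singlePerObjectMs = 38.648592;
        long objects = Math.round(singleTotalMs / singlePerObjectMs);

        // bulk delete path: 3.618443 ms per object
        double bulkPerObjectMs = 3.618443;
        double speedup = singlePerObjectMs / bulkPerObjectMs;

        // prints "objects ~= 10000, speedup ~= 10.7x"
        System.out.printf("objects ~= %d, speedup ~= %.1fx%n", objects, speedup);
    }
}
```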

If we really wanted to speed things up, we could issue the single/bulk
delete calls asynchronously, so that for each max-of-5K file listing we could
spin off up to 5 delete calls and wait for them all to finish. At the same
time, this would lead to a massive spike in parallelised DDB writes; though as
they'd go through the same pool, they'd be limited by pool capacity.
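
That fan-out could look something like the sketch below. This is not the actual S3AFileSystem code: the class, the `deleteKeys` placeholder, the counter, and the pool size of 5 are all assumptions for illustration; a real implementation would issue the S3 bulk DELETE and queue the matching S3Guard metadata removals inside `deleteKeys`.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

/** Illustrative sketch: fan out bulk deletes across a small thread pool. */
class AsyncPageDelete {
    private static final int PAGE_SIZE = 1000;  // S3 bulk DELETE key limit
    private final ExecutorService pool = Executors.newFixedThreadPool(5);
    final AtomicInteger deleted = new AtomicInteger();  // for observability only

    /** Split a listing of up to ~5K keys into pages and delete them in parallel. */
    void deleteAll(List<String> keys) {
        List<CompletableFuture<Void>> pending = new ArrayList<>();
        for (int i = 0; i < keys.size(); i += PAGE_SIZE) {
            List<String> page = keys.subList(i, Math.min(i + PAGE_SIZE, keys.size()));
            pending.add(CompletableFuture.runAsync(() -> deleteKeys(page), pool));
        }
        // await completion before returning, so failures surface to the caller
        CompletableFuture.allOf(pending.toArray(new CompletableFuture[0])).join();
        pool.shutdown();
    }

    /** Placeholder for the real bulk DELETE + S3Guard metadata removal. */
    void deleteKeys(List<String> page) {
        deleted.addAndGet(page.size());
    }
}
```

With a 5-thread pool and 1000-key pages, a 5K listing yields at most 5 in-flight delete calls, matching the scheme described above.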

> S3AFilesystem.delete to incrementally update s3guard with deletions
> -------------------------------------------------------------------
>
>                 Key: HADOOP-16430
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16430
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Major
>         Attachments: Screenshot 2019-07-16 at 22.08.31.png
>
>
> Currently S3AFilesystem.delete() only updates S3Guard at the end of a 
> paged delete operation. This makes it slow when there are many thousands of 
> files to delete, and increases the window of vulnerability to failures.
> Preferred
> * after every bulk DELETE call is issued to S3, queue the (async) delete of 
> all entries in that post.
> * at the end of the delete, await the completion of these operations.
> * inside S3AFS, also do the delete across threads, so that different HTTPS 
> connections can be used.
> This should maximise DDB throughput against tables which aren't IO limited.
> When executed against small IOP-limited tables, the parallel DDB DELETE 
> batches will trigger a lot of throttling events; we should make sure these 
> aren't going to trigger failures.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)
