[ https://issues.apache.org/jira/browse/HADOOP-12292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15135998#comment-15135998 ]

Thomas Demoor commented on HADOOP-12292:
----------------------------------------

Thanks [[email protected]] for adding the test and committing. I saw it this 
afternoon and launched a test run; by the time I got home, it was committed. 
For the record, the test passed against eu-west-1 on my end:

{code}
Running org.apache.hadoop.fs.s3a.scale.TestS3ADeleteFilesOneByOne
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 666.261 sec - in org.apache.hadoop.fs.s3a.scale.TestS3ADeleteFilesOneByOne
{code}

I think it would make our lives easier to backport HADOOP-11684 to branch-2. It 
fixes an obvious bug (see, for instance, HADOOP-12319). Can we add a deprecation 
warning or take other action to make it branch-2 material?
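For reference, a legacy store without multi-object delete support would switch 
it off in core-site.xml roughly like this (property name as I recall it from 
the final patch; worth double-checking against your build):

{code:xml}
<!-- Disable S3 multi-object delete. s3a then issues one DELETE per key,
     matching s3n behaviour. Slower, but works on stores that lack the
     DeleteObjects API. The default is true. -->
<property>
  <name>fs.s3a.multiobjectdelete.enable</name>
  <value>false</value>
</property>
{code}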

> Make use of DeleteObjects optional
> ----------------------------------
>
>                 Key: HADOOP-12292
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12292
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>            Reporter: Thomas Demoor
>            Assignee: Thomas Demoor
>             Fix For: 2.8.0
>
>         Attachments: HADOOP-12292-001.patch, HADOOP-12292-002.patch, 
> HADOOP-12292-003.patch, HADOOP-12292-004.patch, HADOOP-12292-005.patch, 
> HADOOP-12292-branch-2-005.patch
>
>
> The {{DeleteObjectsRequest}} was not part of the initial S3 API, but was 
> added later. This patch allows one to configure s3a to replace each 
> multi-object delete request with consecutive single deletes. Naturally, this 
> setting is disabled by default, as it makes deletes slower.
> The main motivation is to let legacy S3-compatible object stores make the 
> transition from s3n (which does not use multidelete) to s3a, clearing the 
> way for the planned s3n deprecation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
