[ https://issues.apache.org/jira/browse/HADOOP-15191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16346077#comment-16346077 ]
Steve Loughran commented on HADOOP-15191:
-----------------------------------------
HADOOP-15191 patch 003
* fix checkstyle
* add "op_bulk_delete" to the counters
* S3A DistCp test to verify that the bulk delete operation was invoked,
confirming that this code path works (see the counter-check sketch after this
list)
* ITestS3ABulkOperations tweaked to handle tests which delete a directory path,
as these behave differently under S3Guard. The docs for the (internal) bulk API
say "all entries must be files"; S3Guard gets inconsistent if you delete a path
which has a child entry, whereas classic S3 ignores the request (and HDFS, if
it supported this, would fail with an IOE).
For a public/stable API, we'd need to worry about that "deletion of a parent
entry" problem; for S3Guard, a preflight check that every path is not a
directory would be the strategy (see the sketch below). That's not free, and
you'd have to decide what to do: ignore vs. reject. Here, because DistCp is
only deleting files, there's no problem.
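
A minimal sketch of such a preflight check, assuming a getFileStatus() probe
per entry; the class, method, and error text are illustrative, not the actual
patch code:
{code:java}
import java.io.FileNotFoundException;
import java.io.IOException;
import java.util.List;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathIOException;

public final class BulkDeletePreflight {

  /** Reject any page entry which resolves to a directory. */
  public static void checkFilesOnly(FileSystem fs, List<Path> paths)
      throws IOException {
    for (Path path : paths) {
      try {
        if (fs.getFileStatus(path).isDirectory()) {
          // "reject" policy; the alternative is to silently skip the entry
          throw new PathIOException(path.toString(),
              "bulk delete entries must be files");
        }
      } catch (FileNotFoundException ignored) {
        // already absent: nothing to delete, nothing to check
      }
    }
  }
}
{code}
One getFileStatus() per entry is exactly the cost referred to above: it isn't
free, and against S3 it means an extra HEAD request per path.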
Testing: S3 US East, scale, +-S3Guard, +-Auth. Also ran Azure's DistCp test to
verify that my changes haven't broken it (they shouldn't, as all they do is add
a new operation, but I wanted to make sure I hadn't surfaced a bug there
either).
There are some aspects of the S3Guard integration here which I think could be
improved, but that's something to tune once I do the handling of partial delete
failures and S3Guard, which is the real issue and which matters much more than
rename() does. Otherwise, I think this is ready for people to look at.
+[~sanjay.radia]
> Add Private/Unstable BulkDelete operations to supporting object stores for
> DistCP
> ---------------------------------------------------------------------------------
>
> Key: HADOOP-15191
> URL: https://issues.apache.org/jira/browse/HADOOP-15191
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3, tools/distcp
> Affects Versions: 2.9.0
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Major
> Attachments: HADOOP-15191-001.patch, HADOOP-15191-002.patch,
> HADOOP-15191-003.patch
>
>
> Large scale DistCp with the -delete option doesn't finish in a viable time
> because of the final CopyCommitter doing a one-by-one delete of all missing
> files. This isn't randomized (the list is sorted), and it's throttled by AWS.
> If bulk deletion of files were exposed as an API, DistCp would make 1/1000 of
> the REST calls, and so not get throttled.
> Proposed: add an initially private/unstable interface for stores,
> {{BulkDelete}} which declares a page size and offers a
> {{bulkDelete(List<Path>)}} operation for the bulk deletion.
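>
> A minimal sketch of what such a {{BulkDelete}} interface might look like; only
> the {{bulkDelete(List<Path>)}} signature, the page size, and the
> private/unstable scope come from the proposal above, everything else is an
> assumption for illustration:
> {code:java}
> import java.io.IOException;
> import java.util.List;
>
> import org.apache.hadoop.classification.InterfaceAudience;
> import org.apache.hadoop.classification.InterfaceStability;
> import org.apache.hadoop.fs.Path;
>
> /** Bulk delete API for object stores; every entry must be a file. */
> @InterfaceAudience.Private
> @InterfaceStability.Unstable
> public interface BulkDelete {
>
>   /** Maximum number of paths a single bulkDelete() call may take. */
>   int pageSize();
>
>   /**
>    * Delete every path in the list; the list must contain only files
>    * and be no longer than pageSize().
>    */
>   void bulkDelete(List<Path> paths) throws IOException;
> }
> {code}
> A caller would split its sorted list of missing files into pages of at most
> pageSize() entries (1000 keys per S3 DeleteObjects request), issuing one REST
> call per page instead of one per file; hence the 1/1000 estimate above.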