[
https://issues.apache.org/jira/browse/HADOOP-13768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15635794#comment-15635794
]
Steve Loughran commented on HADOOP-13768:
-----------------------------------------
Assuming that the file limit is always 1000, why not just list the path in
1000-object blocks and issue delete requests of that size? There are ultimate
limits to the size of responses in path listings (max size of an HTTP request),
and inevitably heap problems well before then.
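The batching approach suggested above can be sketched in plain Java: split the full key list into chunks no larger than the per-request cap, then issue one bulk delete per chunk. The `MAX_KEYS_PER_REQUEST` constant and the `partition` helper below are illustrative names, not the actual patch; a real implementation would pass each batch to the OSS SDK's bulk delete request rather than print it.

```java
import java.util.ArrayList;
import java.util.List;

public class BatchedDelete {
    // Illustrative constant mirroring the OSS DeleteObjectsRequest cap.
    static final int MAX_KEYS_PER_REQUEST = 1000;

    /** Split keys into batches no larger than the per-request limit. */
    static List<List<String>> partition(List<String> keys) {
        List<List<String>> batches = new ArrayList<>();
        for (int i = 0; i < keys.size(); i += MAX_KEYS_PER_REQUEST) {
            int end = Math.min(i + MAX_KEYS_PER_REQUEST, keys.size());
            batches.add(new ArrayList<>(keys.subList(i, end)));
        }
        return batches;
    }

    public static void main(String[] args) {
        // Simulate a directory with more objects than one request allows.
        List<String> keys = new ArrayList<>();
        for (int i = 0; i < 2500; i++) {
            keys.add("dir/object-" + i);
        }
        // 2500 keys -> 3 batches: 1000, 1000, 500.
        for (List<String> batch : partition(keys)) {
            System.out.println("would delete " + batch.size() + " objects");
        }
    }
}
```

In practice the listing itself would also be paged (e.g. 1000 keys at a time), so listing and deleting can proceed batch by batch without ever holding the whole directory's key set in memory.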
> AliyunOSS: handle deleteDirs reliably when too many objects to delete
> ---------------------------------------------------------------------
>
> Key: HADOOP-13768
> URL: https://issues.apache.org/jira/browse/HADOOP-13768
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs
> Affects Versions: 3.0.0-alpha2
> Reporter: Genmao Yu
> Assignee: Genmao Yu
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13768.001.patch, HADOOP-13768.002.patch
>
>
> Note: in the Aliyun OSS SDK, DeleteObjectsRequest has a 1000-object limit.
> The {{deleteDirs}} operation needs to be improved so that it succeeds when
> there are more objects to delete than the limit allows in one request.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]