[jira] [Commented] (HADOOP-13768) AliyunOSS: handle deleteDirs reliably when too many objects to delete

2017-01-20 Thread Genmao Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15831797#comment-15831797
 ] 

Genmao Yu commented on HADOOP-13768:


[~ste...@apache.org] Makes sense. I made a mistake: it is not possible to 
submit more than 1000 objects in a single delete request. Perhaps this JIRA 
should instead handle a different issue, i.e. failures within the batch delete 
operation. I will update the patch as soon as possible.

> AliyunOSS: handle deleteDirs reliably when too many objects to delete
> ---------------------------------------------------------------------
>
> Key: HADOOP-13768
> URL: https://issues.apache.org/jira/browse/HADOOP-13768
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha2
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13768.001.patch, HADOOP-13768.002.patch
>
>
> Note that in the Aliyun OSS SDK, DeleteObjectsRequest has a 1000-object 
> limit. The {{deleteDirs}} operation needs to be improved so it succeeds 
> when there are more objects to delete than the limit allows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13768) AliyunOSS: handle deleteDirs reliably when too many objects to delete

2016-11-04 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15635794#comment-15635794
 ] 

Steve Loughran commented on HADOOP-13768:
-----------------------------------------

assuming that the limit is always 1000, why not just list the path in blocks 
of 1000 and issue delete requests of that size? There are ultimate limits to 
the size of responses in path listings (max size of an HTTP request), and 
inevitably heap problems well before then.
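As an illustration of the batching suggested here, the sketch below splits a
listed set of keys into chunks of at most 1000, one delete request per chunk.
This is a hypothetical, self-contained example: the class and method names are
invented, and the actual OSS delete call is omitted.

```java
import java.util.ArrayList;
import java.util.List;

public class BatchDelete {
    // The DeleteObjectsRequest cap discussed in this issue.
    static final int BATCH_LIMIT = 1000;

    // Split keys into consecutive sublists of at most BATCH_LIMIT entries;
    // each sublist would back one DeleteObjectsRequest.
    static List<List<String>> partition(List<String> keys) {
        List<List<String>> batches = new ArrayList<>();
        for (int i = 0; i < keys.size(); i += BATCH_LIMIT) {
            batches.add(keys.subList(i, Math.min(i + BATCH_LIMIT, keys.size())));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<String> keys = new ArrayList<>();
        for (int i = 0; i < 2500; i++) {
            keys.add("dir/object-" + i);
        }
        // 2500 keys split into batches of 1000, 1000 and 500.
        System.out.println(BatchDelete.partition(keys).size());
    }
}
```

In the real connector the listing itself is also paged, so in practice each
listing page can be deleted as it arrives rather than accumulating all keys
in memory first, which addresses the heap concern above.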







[jira] [Commented] (HADOOP-13768) AliyunOSS: handle deleteDirs reliably when too many objects to delete

2016-11-02 Thread Genmao Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15631420#comment-15631420
 ] 

Genmao Yu commented on HADOOP-13768:


Got it.







[jira] [Commented] (HADOOP-13768) AliyunOSS: handle deleteDirs reliably when too many objects to delete

2016-11-02 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15631263#comment-15631263
 ] 

Kai Zheng commented on HADOOP-13768:


Looking at the following codes:
{code}
+    List<String> deleteFailed = keysToDelete;
+    while (CollectionUtils.isNotEmpty(deleteFailed)) {
+      List<String> l = new ArrayList<>();
+      List<List<String>> smallerLists = Lists.partition(deleteFailed, 1000);
+      for (List<String> smallerList : smallerLists) {
+        DeleteObjectsRequest deleteRequest =
+            new DeleteObjectsRequest(bucketName);
+        deleteRequest.setKeys(smallerList);
+        deleteRequest.setQuiet(true);
+        DeleteObjectsResult result = ossClient.deleteObjects(deleteRequest);
+        l.addAll(result.getDeletedObjects());
+      }
+      deleteFailed = l;
+      tries++;
+      if (tries == retry) {
+        break;
+      }
+    }
{code}

1. Please give {{l}} a more readable name.
2. Could you add comments explaining the procedure? I (and probably others) 
wouldn't know why it is written this way without consulting the SDK manual. 
I understand it now: there are two modes in the {{ossClient.deleteObjects}} 
operation, one returning the successfully deleted objects and the other 
(quiet mode) returning the objects that failed to delete. You are using the 
latter, looping a bounded number of times to re-delete the objects that 
failed on the previous attempt.
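The quiet-mode retry pattern described above can be sketched in isolation as
follows. This is a hypothetical, self-contained example, not the actual patch:
the OSS client is replaced by a stub ({{quietDelete}}) that, like quiet mode,
returns only the keys that failed, and all names here are invented.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DeleteRetryLoop {

    // Stub standing in for a quiet-mode batch delete: returns the keys that
    // were NOT deleted. Keys in 'flaky' fail on their first attempt only.
    static List<String> quietDelete(List<String> keys, Set<String> flaky) {
        List<String> failed = new ArrayList<>();
        for (String k : keys) {
            if (flaky.remove(k)) {  // fails once, then succeeds on retry
                failed.add(k);
            }
        }
        return failed;
    }

    // Re-submit the failed remainder until it is empty or the retry
    // budget is exhausted; returns how many keys were never deleted.
    static int deleteWithRetries(List<String> keys, Set<String> flaky,
                                 int maxRetries) {
        List<String> remaining = keys;
        int tries = 0;
        while (!remaining.isEmpty() && tries < maxRetries) {
            remaining = quietDelete(remaining, flaky);
            tries++;
        }
        return remaining.size();
    }

    public static void main(String[] args) {
        List<String> keys = Arrays.asList("a", "b", "c", "d");
        Set<String> flaky = new HashSet<>(Arrays.asList("b", "d"));
        // "b" and "d" fail the first pass and are retried successfully.
        System.out.println(deleteWithRetries(keys, flaky, 3));
    }
}
```

The bounded loop matters: without the retry cap, keys that persistently fail
to delete (e.g. due to permissions) would spin forever.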



