amogh-jahagirdar commented on a change in pull request #4052:
URL: https://github.com/apache/iceberg/pull/4052#discussion_r810413269



##########
File path: aws/src/main/java/org/apache/iceberg/aws/s3/S3FileIO.java
##########
@@ -100,6 +114,66 @@ public void deleteFile(String path) {
     client().deleteObject(deleteRequest);
   }
 
+  /**
+   * Deletes the given paths in a batched manner.
+   * <p>
+   * The paths are grouped by bucket, and deletion is triggered when we either reach the configured batch size
+   * or have a final remainder batch for each bucket.
+   *
+   * @param paths paths to delete
+   */
+  @Override
+  public void deleteFiles(Iterable<String> paths) {
+    SetMultimap<String, String> bucketToObjects = Multimaps.newSetMultimap(Maps.newHashMap(), Sets::newHashSet);
+    List<String> failedDeletions = Lists.newArrayList();
+    for (String path : paths) {
+      S3URI location = new S3URI(path);
+      String bucket = location.bucket();
+      String objectKey = location.key();
+      Set<String> objectsInBucket = bucketToObjects.get(bucket);
+      if (objectsInBucket.size() == awsProperties.s3FileIoDeleteBatchSize()) {
+        List<String> failedDeletionsForBatch = deleteObjectsInBucket(bucket, objectsInBucket);
+        failedDeletions.addAll(failedDeletionsForBatch);
+        bucketToObjects.removeAll(bucket);
+      }
+      bucketToObjects.get(bucket).add(objectKey);
+    }
+    // Delete the remainder
+    List<List<String>> remainderFailedObjects = bucketToObjects
+        .asMap()
+        .entrySet()
+        .stream()
+        .map(entry -> deleteObjectsInBucket(entry.getKey(), entry.getValue()))
+        .collect(Collectors.toList());
+
+    remainderFailedObjects.forEach(failedDeletions::addAll);
+    if (!failedDeletions.isEmpty()) {
+      throw new S3BatchDeletionException(String.format("Failed to delete %d objects. Failed objects: %s",

Review comment:
       Agree on a generic exception, but I think we want a different name, because deleteFiles(files) does not necessarily correspond to batch deletion in other FileIO implementations; the default implementation deletes files iteratively, one by one. So we could define a broader FileIODeleteException? It's not something we need to add to the interface, as that would be pretty limiting, but implementations can take a call on whether to throw it or not.
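
A broader, FileIO-agnostic exception along the lines suggested above could look roughly like this. This is only a sketch: the class name `FileIODeleteException`, its constructor, and the `failedPaths()` accessor are assumptions for illustration, not the API discussed or merged in the PR.

```java
import java.util.Collections;
import java.util.List;

/**
 * Sketch of a deletion exception that is not tied to batch semantics,
 * so it fits both batched deleters (like S3FileIO) and implementations
 * that delete files one by one.
 */
public class FileIODeleteException extends RuntimeException {
  private final List<String> failedPaths;

  public FileIODeleteException(String message, List<String> failedPaths) {
    super(message);
    // Defensive, read-only view so callers cannot mutate the failure list.
    this.failedPaths = Collections.unmodifiableList(failedPaths);
  }

  /** Paths that could not be deleted, however the implementation grouped them. */
  public List<String> failedPaths() {
    return failedPaths;
  }
}
```

An implementation could then accumulate failures across batches and throw once at the end, while the default one-by-one implementation could throw the same type per file or not at all.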




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


