kfaraz commented on code in PR #19009:
URL: https://github.com/apache/druid/pull/19009#discussion_r2793822140


##########
extensions-core/s3-extensions/src/main/java/org/apache/druid/storage/s3/S3DataSegmentKiller.java:
##########
@@ -103,13 +103,9 @@ public void kill(List<DataSegment> segments) throws SegmentLoadingException
           k -> new ArrayList<>()
       );
       if (path.endsWith("/")) {
-        // segment is not compressed, list objects and add them all to delete list
-        final ListObjectsV2Result list = s3Client.listObjectsV2(
-            new ListObjectsV2Request().withBucketName(s3Bucket).withPrefix(path)
-        );
-        for (S3ObjectSummary objectSummary : list.getObjectSummaries()) {
-          keysToDelete.add(new DeleteObjectsRequest.KeyVersion(objectSummary.getKey()));
-        }
+        // segment is not compressed, list all objects with pagination and add them to delete list
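
For reference, a minimal sketch of what the paginated listing described by the new comment might look like with the AWS SDK v1 client used in this class. The added lines are not shown in this hunk, so the loop shape is an assumption; s3Client, s3Bucket, path, and keysToDelete are the variables from the surrounding method:

    // Assumed sketch, not the PR's actual code: ListObjectsV2 returns at most
    // 1000 keys per call, so follow the continuation token until it is null.
    String continuationToken = null;
    do {
      final ListObjectsV2Result result = s3Client.listObjectsV2(
          new ListObjectsV2Request()
              .withBucketName(s3Bucket)
              .withPrefix(path)
              .withContinuationToken(continuationToken)
      );
      for (S3ObjectSummary objectSummary : result.getObjectSummaries()) {
        keysToDelete.add(new DeleteObjectsRequest.KeyVersion(objectSummary.getKey()));
      }
      continuationToken = result.getNextContinuationToken();
    } while (continuationToken != null);

On the last page, getNextContinuationToken() returns null, which terminates the loop.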

Review Comment:
   On a second glance, @abannon, I wonder if this change is even necessary. I am not sure there can be a case where a single segment has more than 1000 files. Have you encountered such a case?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.


