danielcweeks commented on code in PR #5096:
URL: https://github.com/apache/iceberg/pull/5096#discussion_r903035140


##########
aws/src/main/java/org/apache/iceberg/aws/s3/S3FileIO.java:
##########
@@ -241,6 +246,52 @@ private List<String> deleteObjectsInBucket(String bucket, Collection<String> obj
     return Lists.newArrayList();
   }
 
+  @Override
+  public Iterator<FileInfo> listPrefix(String prefix) {
+    S3URI s3uri = new S3URI(prefix, awsProperties.s3BucketToAccessPointMapping());
+
+    return internalListPrefix(s3uri.bucket(), s3uri.key()).stream()
+        .flatMap(r -> r.contents().stream())
+        .map(o -> new FileInfo(o.key(), o.size(), o.lastModified().toEpochMilli()))
+        .iterator();
+  }
+
+  /**
+   * Performs a best-effort delete of all objects under the given prefix.
+   *
+   * Bulk delete operations are used, and failed deletes are not retried;
+   * however, any individual objects that are not deleted as part of a bulk
+   * operation are logged.
+   *
+   * @param prefix prefix to delete
+   */
+  @Override
+  public void deletePrefix(String prefix) {

Review Comment:
   I've updated to use iterable everywhere and we'll just reissue the 
underlying listing.
   
   The latest update reuses the existing bulk delete path, which is functionally equivalent but avoids duplication and respects the tagging behavior.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

