Samrat002 commented on code in PR #27187:
URL: https://github.com/apache/flink/pull/27187#discussion_r2934669348


##########
flink-filesystems/flink-s3-fs-native/src/main/java/org/apache/flink/fs/s3native/NativeS3BulkCopyHelper.java:
##########
@@ -0,0 +1,192 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.fs.s3native;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.core.fs.ICloseableRegistry;
+import org.apache.flink.core.fs.PathsCopyingFileSystem;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import software.amazon.awssdk.transfer.s3.S3TransferManager;
+import software.amazon.awssdk.transfer.s3.model.CompletedCopy;
+import software.amazon.awssdk.transfer.s3.model.DownloadFileRequest;
+import software.amazon.awssdk.transfer.s3.model.FileDownload;
+
+import java.io.File;
+import java.io.IOException;
+import java.nio.file.Files;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.ExecutionException;
+
+/**
+ * Helper class for performing bulk S3 to local file system copies using S3TransferManager.
+ *
+ * <p><b>Concurrency Model:</b> Uses batch-based concurrency control with {@code
+ * maxConcurrentCopies} to limit parallel downloads. The current implementation waits for each batch
+ * to complete before starting the next batch. A future enhancement could use a bounded thread pool
+ * (e.g., {@link java.util.concurrent.Semaphore} or bounded executor) to allow continuous submission
+ * of new downloads as slots become available, which would provide better throughput by avoiding the
+ * "slowest task in batch" bottleneck.
+ *
+ * <p><b>Retry Handling:</b> Relies on the S3TransferManager's built-in retry mechanism for
+ * transient failures. If a download fails after retries:
+ *
+ * <ul>
+ *   <li>The entire bulk copy operation fails with an IOException
+ *   <li>Successfully downloaded files are NOT cleaned up (they remain on disk)
+ *   <li>Partial downloads may leave incomplete files that should be cleaned up by the caller
+ * </ul>
+ *
+ * <p><b>Cleanup:</b> No automatic cleanup is performed on failure. Callers are responsible for
+ * cleaning up destination files if the bulk copy fails. Consider wrapping in a try-finally or using
+ * a temp directory that can be deleted on failure.
+ *
+ * <p><b>TODO:</b> Consider extracting URI parsing logic to a shared S3UriUtils utility class to
+ * consolidate S3 URI handling across the codebase.
+ */
+@Internal
+public class NativeS3BulkCopyHelper {
+
+    private static final Logger LOG = LoggerFactory.getLogger(NativeS3BulkCopyHelper.class);
+
+    private final S3TransferManager transferManager;
+    private final int maxConcurrentCopies;
+
+    public NativeS3BulkCopyHelper(S3TransferManager transferManager, int maxConcurrentCopies) {
+        this.transferManager = transferManager;
+        this.maxConcurrentCopies = maxConcurrentCopies;
+    }
+
+    /**
+     * Copies files from S3 to local filesystem in batches.
+     *
+     * @param requests List of copy requests (source S3 path to destination local path)
+     * @param closeableRegistry Registry for cleanup (currently unused, reserved for future use)
+     * @throws IOException if any copy operation fails
+     */
+    public void copyFiles(
+            List<PathsCopyingFileSystem.CopyRequest> requests, ICloseableRegistry closeableRegistry)
+            throws IOException {
+
+        if (requests.isEmpty()) {
+            return;
+        }
+
+        LOG.info("Starting bulk copy of {} files using S3TransferManager", requests.size());
+
+        List<CompletableFuture<CompletedCopy>> copyFutures = new ArrayList<>();
+
+        for (int i = 0; i < requests.size(); i++) {
+            PathsCopyingFileSystem.CopyRequest request = requests.get(i);
+            String sourceUri = request.getSource().toUri().toString();
+            if (sourceUri.startsWith("s3://") || sourceUri.startsWith("s3a://")) {
+                copyFutures.add(copyS3ToLocal(request));
+            } else {
+                throw new UnsupportedOperationException(
+                        "Only S3 to local copies are currently supported: " + sourceUri);
+            }

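The Javadoc above mentions a possible future enhancement: replacing batch-based waiting with a bounded thread pool or `Semaphore` for continuous submission. A minimal, self-contained sketch of that idea follows; the class and method names are hypothetical (not part of this PR), and `startDownload` merely stands in for the real `S3TransferManager` download future.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Semaphore;

public class BoundedSubmitSketch {

    /** Stand-in for transferManager.downloadFile(...).completionFuture(). */
    static CompletableFuture<String> startDownload(String uri) {
        return CompletableFuture.supplyAsync(() -> uri);
    }

    /** Submits downloads continuously, never exceeding maxConcurrent in flight. */
    public static int copyAll(List<String> uris, int maxConcurrent) throws InterruptedException {
        Semaphore slots = new Semaphore(maxConcurrent);
        List<CompletableFuture<String>> futures = new ArrayList<>();
        for (String uri : uris) {
            // Blocks only when maxConcurrent downloads are in flight; each completed
            // download frees a slot, so there is no "slowest task in batch" barrier.
            slots.acquire();
            futures.add(startDownload(uri).whenComplete((r, t) -> slots.release()));
        }
        // Drain the remaining in-flight downloads.
        CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();
        return futures.size();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(copyAll(List.of("s3://b/1", "s3://b/2", "s3://b/3"), 2));
        // prints 3
    }
}
```

Compared to batches, a new download starts as soon as any slot frees up, which keeps the pipeline full when download sizes are uneven.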
Review Comment:
   > Shouldn't we wait for the already started futures to complete?
   
   Good catch 🙌🏻, this was an important safety issue. I have addressed it.
   
   > So that for example the calling code can delete the destination after the upload.
   (maybe we can try to cancel if that's possible)
   
   Cancellation is problematic for S3 downloads. Here is why: S3TransferManager's FileDownload streams data directly to disk in real time. Once a download starts writing to the destination file, it is difficult to cancel mid-stream safely, and cancelling a partial download leaves an incomplete file on disk that must be cleaned up. The caller would still need to detect which files are incomplete and handle the cleanup logic.
   
   I chose to wait for all in-flight futures to complete rather than cancel because:
   
   - Error cases are rare in normal operation; all URIs should be valid before the method is called.
   - Data integrity: waiting ensures each file is either fully written or not written at all (no partial/corrupted files).



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

Reply via email to