johnclara commented on a change in pull request #1767:
URL: https://github.com/apache/iceberg/pull/1767#discussion_r523707940



##########
File path: aws/src/main/java/org/apache/iceberg/aws/s3/S3OutputStream.java
##########
@@ -87,17 +190,105 @@ public void close() throws IOException {
 
     super.close();
     closed = true;
+    currentStagingFile = null;
 
     try {
       stream.close();
 
+      completeUploads();
+    } finally {
+      stagingFiles.forEach(f -> {
+        if (f.exists() && !f.delete()) {
+          LOG.warn("Could not delete temporary file: {}", f);
+        }
+      });
+    }
+  }
+
+  private void initializeMultiPartUpload() {
+    multipartUploadId = s3.createMultipartUpload(CreateMultipartUploadRequest.builder()

Review comment:
       s3a supports canned ACLs for S3 requests; not sure if this should also support them?
   
   My team had to use them once when writing from account A to a bucket owned by account B.
   
   We also had to add it to rdblue/s3committer:
   
https://github.com/rdblue/s3committer/blob/master/src/main/java/com/netflix/bdp/s3/S3Util.java#L73
   ```java
       InitiateMultipartUploadRequest initiateMultipartUploadRequest = new InitiateMultipartUploadRequest(bucket, key)
             .withCannedACL(CannedAccessControlList.BucketOwnerFullControl);
       InitiateMultipartUploadResult initiate = client.initiateMultipartUpload(initiateMultipartUploadRequest);
   ```
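   
   The snippet above is AWS SDK v1; the code in this PR uses the v2 `CreateMultipartUploadRequest.builder()` API, which exposes the same canned-ACL knob via `acl(ObjectCannedACL)`. A minimal sketch of what that might look like here (assuming `s3`, `bucket`, and `key` are in scope as in `S3OutputStream`; this is a suggestion, not code from the PR):
   
   ```java
   import software.amazon.awssdk.services.s3.model.CreateMultipartUploadRequest;
   import software.amazon.awssdk.services.s3.model.ObjectCannedACL;
   
       // Sketch: SDK v2 equivalent of the s3committer snippet above,
       // granting the bucket owner full control on the multipart upload.
       CreateMultipartUploadRequest request = CreateMultipartUploadRequest.builder()
           .bucket(bucket)
           .key(key)
           .acl(ObjectCannedACL.BUCKET_OWNER_FULL_CONTROL)
           .build();
       String uploadId = s3.createMultipartUpload(request).uploadId();
   ```
   
   Making the ACL configurable (rather than hard-coded) would cover both the single-account and cross-account cases.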
   




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]



---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
