Akshat-Jain commented on code in PR #16481:
URL: https://github.com/apache/druid/pull/16481#discussion_r1618168687
##########
extensions-core/s3-extensions/src/main/java/org/apache/druid/storage/s3/output/RetryableS3OutputStream.java:
##########
@@ -103,27 +104,38 @@ public class RetryableS3OutputStream extends OutputStream
private boolean error;
private boolean closed;
- public RetryableS3OutputStream(
- S3OutputConfig config,
- ServerSideEncryptingAmazonS3 s3,
- String s3Key
- ) throws IOException
- {
+ /**
+ * An atomic counter to store number of files pending to be uploaded for the particular uploadId.
+ */
+ private final AtomicInteger pendingFiles = new AtomicInteger(0);
- this(config, s3, s3Key, true);
- }
+ /**
+ * A lock used for notifying the main thread about the completion of s3.uploadPart() for all chunks
+ * and hence starting the s3.completeMultipartUpload() for the uploadId.
+ */
+ private final Object fileLock = new Object();
+
+ /**
+ * Helper class for calculating maximum number of simultaneous chunks allowed on local disk.
+ */
+ private final S3UploadManager uploadManager;
- @VisibleForTesting
- protected RetryableS3OutputStream(
+ /**
+ * A lock to restrict the maximum number of chunks on disk at any given point in time.
+ */
+ private final Object maxChunksLock = new Object();
Review Comment:
@kfaraz This lock makes the processing thread wait before writing further chunks to disk. The `S3UploadManager` deals only with the upload threads. This relates to my comment here on the refactoring suggestion:
https://github.com/apache/druid/pull/16481#discussion_r1618143342
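
For illustration, here is a minimal, hypothetical sketch of the wait/notify pattern the comment describes: the processing (writer) thread blocks on a lock before creating a new chunk once the number of chunks on disk reaches a limit, and upload threads notify it as chunks finish uploading. The class name `ChunkThrottle`, the `maxChunksOnDisk` parameter, and the method names are illustrative assumptions, not the actual code in this PR:

```java
// Hypothetical sketch (not the PR's implementation) of throttling chunk
// writes with a monitor lock, in the spirit of maxChunksLock above.
public class ChunkThrottle
{
  private final Object maxChunksLock = new Object();
  private final int maxChunksOnDisk; // assumed configurable limit
  private int chunksOnDisk = 0;

  public ChunkThrottle(int maxChunksOnDisk)
  {
    this.maxChunksOnDisk = maxChunksOnDisk;
  }

  /** Called by the processing thread before writing a new chunk to disk. */
  public void beforeChunkWrite() throws InterruptedException
  {
    synchronized (maxChunksLock) {
      // Block until an upload thread frees a slot on disk.
      while (chunksOnDisk >= maxChunksOnDisk) {
        maxChunksLock.wait();
      }
      chunksOnDisk++;
    }
  }

  /** Called by an upload thread after a chunk is uploaded and deleted from disk. */
  public void afterChunkUploaded()
  {
    synchronized (maxChunksLock) {
      chunksOnDisk--;
      maxChunksLock.notifyAll();
    }
  }

  /** Current number of chunks on disk (for inspection). */
  public int chunksOnDisk()
  {
    synchronized (maxChunksLock) {
      return chunksOnDisk;
    }
  }

  public static void main(String[] args) throws Exception
  {
    ChunkThrottle throttle = new ChunkThrottle(2);
    throttle.beforeChunkWrite();
    throttle.beforeChunkWrite();
    // A third beforeChunkWrite() here would block until an upload completes.
    throttle.afterChunkUploaded();
    throttle.beforeChunkWrite(); // proceeds, since a slot was freed
  }
}
```

A `java.util.concurrent.Semaphore` with `acquire()`/`release()` would express the same idea more compactly; the explicit monitor is shown only because the quoted diff uses a plain `Object` lock.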
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]