[
https://issues.apache.org/jira/browse/HADOOP-19047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17830853#comment-17830853
]
ASF GitHub Bot commented on HADOOP-19047:
-----------------------------------------
shameersss1 commented on code in PR #6468:
URL: https://github.com/apache/hadoop/pull/6468#discussion_r1538875809
##########
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/InMemoryMagicCommitTracker.java:
##########
@@ -113,15 +122,26 @@ public boolean aboutToComplete(String uploadId,
     return false;
   }

-  public static Map<String, List<SinglePendingCommit>> getTaskAttemptIdToMpuMetdadataMap() {
-    return taskAttemptIdToMpuMetdadataMap;
+  @Override
+  public String toString() {
+    final StringBuilder sb = new StringBuilder(
+        "InMemoryMagicCommitTracker{");
+    sb.append(", Number of taskAttempts=").append(TASK_ATTEMPT_ID_TO_MPU_METDADATA.size());
+    sb.append(", Number of files=").append(PATH_TO_BYTES_WRITTEN.size());
+    sb.append('}');
+    return sb.toString();
+  }
+
+
+  public static Map<String, List<SinglePendingCommit>> getTaskAttemptIdToMpuMetdadata() {
Review Comment:
ack.
> Support InMemory Tracking Of S3A Magic Commits
> ----------------------------------------------
>
> Key: HADOOP-19047
> URL: https://issues.apache.org/jira/browse/HADOOP-19047
> Project: Hadoop Common
> Issue Type: Improvement
> Components: fs/s3
> Reporter: Syed Shameerur Rahman
> Assignee: Syed Shameerur Rahman
> Priority: Major
> Labels: pull-request-available
>
> The following are the operations which happen within a Task when it uses the
> S3A Magic Committer.
> *During closing of stream*
> 1. A 0-byte file with the same name as the original file is uploaded to S3
> using a PUT operation. Refer
> [here|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/MagicCommitTracker.java#L152]
> for more information. This is done so that downstream applications like
> Spark can get the size of the file being written.
> 2. MultiPartUpload (MPU) metadata is uploaded to S3. Refer
> [here|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/MagicCommitTracker.java#L176]
> for more information.
> *During TaskCommit*
> 1. All the MPU metadata which the task wrote to S3 (there will be 'x'
> metadata files in S3 if a single task writes to 'x' files) is read and
> rewritten to S3 as a single metadata file. Refer
> [here|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/MagicS3GuardCommitter.java#L201]
> for more information.
> Since these operations happen within the Task JVM, we can optimize as well
> as save cost by storing this information in memory when Task memory usage is
> not a constraint. Hence the proposal here is to introduce a new MagicCommit
> Tracker called "InMemoryMagicCommitTracker" which will
> 1. Store the MPU metadata in memory until the Task is committed.
> 2. Store the size of the file, which can be used by the downstream
> application to get the file size before it is committed/visible in the output path.
> This optimization saves 2 S3 PUT calls, 1 S3 LIST call, and 1 S3 GET call
> when a Task writes only 1 file.
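The in-memory bookkeeping described in the issue can be sketched as follows. This is a minimal illustration under assumed names: the class, field, and method names here are hypothetical stand-ins, not the actual Hadoop implementation (which keys real SinglePendingCommit objects, not strings).

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: per-JVM maps replacing the per-file S3 PUTs of
// marker objects and .pending metadata described in the issue.
public class InMemoryTrackerSketch {

  // taskAttemptId -> MPU commit metadata recorded when each stream closes
  // (stands in for the real Map<String, List<SinglePendingCommit>>)
  static final Map<String, List<String>> TASK_ATTEMPT_ID_TO_MPU_METADATA =
      new ConcurrentHashMap<>();

  // destination path -> bytes written, so a downstream reader can learn the
  // file size without the 0-byte marker PUT
  static final Map<String, Long> PATH_TO_BYTES_WRITTEN =
      new ConcurrentHashMap<>();

  // Called at stream close: keep the metadata in memory instead of
  // uploading a marker object and a .pending object to S3.
  static void recordCommit(String taskAttemptId, String path,
                           String pendingCommit, long bytes) {
    TASK_ATTEMPT_ID_TO_MPU_METADATA
        .computeIfAbsent(taskAttemptId, k -> new ArrayList<>())
        .add(pendingCommit);
    PATH_TO_BYTES_WRITTEN.put(path, bytes);
  }

  // Called at TaskCommit: drain everything this task attempt wrote,
  // replacing the LIST + GET of per-file .pending objects in S3.
  static List<String> drainCommits(String taskAttemptId) {
    List<String> commits =
        TASK_ATTEMPT_ID_TO_MPU_METADATA.remove(taskAttemptId);
    return commits == null ? Collections.emptyList() : commits;
  }
}
```

Since the maps live only in the Task JVM, this trades S3 round trips for task-attempt memory, which is why the issue scopes the optimization to cases where task memory is not a constraint.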
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]