shameersss1 commented on code in PR #6468:
URL: https://github.com/apache/hadoop/pull/6468#discussion_r1477749474
##########
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/MagicS3GuardCommitter.java:
##########
@@ -264,9 +326,14 @@ public void abortTask(TaskAttemptContext context) throws IOException {
     try (DurationInfo d = new DurationInfo(LOG,
         "Abort task %s", context.getTaskAttemptID());
         CommitContext commitContext = initiateTaskOperation(context)) {
-      getCommitOperations().abortAllSinglePendingCommits(attemptPath,
-          commitContext,
-          true);
+      if (isTrackMagicCommitsInMemoryEnabled(context.getConfiguration())) {
+        List<SinglePendingCommit> pendingCommits =
+            loadPendingCommitsFromMemory(context);
+        for (SinglePendingCommit singleCommit : pendingCommits) {
+          commitContext.abortSingleCommit(singleCommit);
+        }
+      } else {
+        getCommitOperations().abortAllSinglePendingCommits(attemptPath,
+            commitContext, true);
Review Comment:
AFAIK, Spark calls abortTask from the same process (the executor). When the
job fails, the abortJob operation is called, which lists all the pending
uploads and aborts them, as mentioned in the comment
[here](https://github.com/apache/hadoop/pull/6468#issuecomment-1926348440).
I am not sure why a different process would call abortTask; the driver
process should ideally call abortJob if a job fails.
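
To make the process-boundary point concrete, here is a minimal sketch (not the
Hadoop/S3A committer code; class and method names such as `AbortFlowSketch` and
`listPendingUploadsUnder` are hypothetical) of why an in-memory map of pending
commits is only usable by abortTask running in the same executor JVM, while a
job-level abort has to fall back to listing pending uploads under the
destination path:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hedged illustration only: models the reviewer's argument, not the actual
// MagicS3GuardCommitter implementation.
public class AbortFlowSketch {

  // In-memory registry of pending uploads, visible only inside the executor
  // JVM that populated it.
  static final Map<String, List<String>> PENDING_BY_TASK_ATTEMPT =
      new ConcurrentHashMap<>();

  // Task side: record an upload as it is started (same process as the task).
  static void trackUpload(String taskAttemptId, String uploadId) {
    PENDING_BY_TASK_ATTEMPT
        .computeIfAbsent(taskAttemptId, k -> new ArrayList<>())
        .add(uploadId);
  }

  // abortTask path: only works because it runs in the same executor process
  // that tracked the uploads.
  static void abortTask(String taskAttemptId) {
    List<String> uploads =
        PENDING_BY_TASK_ATTEMPT.getOrDefault(taskAttemptId, List.of());
    uploads.forEach(AbortFlowSketch::abortUpload);
    PENDING_BY_TASK_ATTEMPT.remove(taskAttemptId);
  }

  // abortJob path (driver side): a different JVM has no view of the
  // executors' in-memory maps, so it lists every pending upload under the
  // destination and aborts them all -- the safety net referred to above.
  static void abortJob(String jobOutputPath) {
    for (String uploadId : listPendingUploadsUnder(jobOutputPath)) {
      abortUpload(uploadId);
    }
  }

  // Stand-ins for the S3 calls, hypothetical for this sketch.
  static List<String> listPendingUploadsUnder(String path) {
    return List.of(); // would be a ListMultipartUploads call in practice
  }

  static void abortUpload(String uploadId) {
    System.out.println("aborting multipart upload " + uploadId);
  }

  public static void main(String[] args) {
    // Executor process: an upload is tracked and aborted in-process.
    trackUpload("attempt_001", "upload-1");
    abortTask("attempt_001");
    // Driver process (conceptually a different JVM): must list under the path.
    abortJob("s3a://bucket/output");
  }
}
```

Under that assumption, a task-level abort issued from any process other than
the executor that ran the attempt would see an empty registry, which is why the
job-level abort remains the cleanup path for that case.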
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]