turboFei commented on a change in pull request #25795: [SPARK-29037][Core] Spark gives duplicate results when an application is killed
URL: https://github.com/apache/spark/pull/25795#discussion_r325020928
##########
File path: core/src/main/scala/org/apache/spark/internal/io/HadoopMapReduceCommitProtocol.scala
##########
@@ -160,11 +160,15 @@ class HadoopMapReduceCommitProtocol(
     val taskAttemptContext = new TaskAttemptContextImpl(jobContext.getConfiguration, taskAttemptId)
     committer = setupCommitter(taskAttemptContext)
-    committer.setupJob(jobContext)
+    if (!dynamicPartitionOverwrite) {
Review comment:
> > I think the staging dir will be cleaned up by `FileOutputCommitter.abortJob()`.
>
> Why can't it be cleaned when `dynamicPartitionOverwrite=true`?
As for the case in the PR description: it happens when appA (static partition overwrite) is killed and its staging dir is not cleaned up gracefully; appB then commits partial results of appA.