cloud-fan commented on a change in pull request #25795: [SPARK-29037][Core] Spark gives duplicate result when an application was killed
URL: https://github.com/apache/spark/pull/25795#discussion_r325022375
 
 

 ##########
 File path: core/src/main/scala/org/apache/spark/internal/io/HadoopMapReduceCommitProtocol.scala
 ##########
 @@ -160,11 +160,15 @@ class HadoopMapReduceCommitProtocol(
 
     val taskAttemptContext = new TaskAttemptContextImpl(jobContext.getConfiguration, taskAttemptId)
     committer = setupCommitter(taskAttemptContext)
-    committer.setupJob(jobContext)
+    if (!dynamicPartitionOverwrite) {
 
 Review comment:
   OK, so we can't rely on the job cleanup, and ideally we should use a different staging dir for each job.
   
   That said, it seems we can't fix the problem for non-partitioned tables if we continue to use the Hadoop output committer.
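   
   To illustrate the "different staging dir for each job" idea, here is a minimal Scala sketch (not the PR's actual change; `outputPath` and `jobId` are hypothetical parameters): the staging location is made unique per job run, so output left behind by a killed application cannot be picked up and re-committed by a later job.
   
   ```scala
   import java.util.UUID
   
   import org.apache.hadoop.fs.Path
   
   // Sketch only: derive a job-unique staging directory under the output path.
   class UniqueStagingExample(outputPath: String, jobId: String) {
   
     // A random UUID keeps the staging dir unique even if the same jobId is
     // reused after the application is killed and restarted.
     private val stagingSuffix = s".spark-staging-$jobId-${UUID.randomUUID()}"
   
     // Task output would be written under this directory and only moved to
     // `outputPath` at job commit; anything left by a killed run stays in its
     // own staging dir and can simply be deleted.
     def stagingDir: Path = new Path(outputPath, stagingSuffix)
   }
   ```
   
   With a layout like this, a job abort (or a later cleanup pass) only needs to delete the job's own staging dir, instead of relying on the Hadoop committer's job cleanup.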
