cloud-fan commented on a change in pull request #25795: [SPARK-29037][Core] Spark gives duplicate result when an application was killed
URL: https://github.com/apache/spark/pull/25795#discussion_r324998240
 
 

 ##########
 File path: core/src/main/scala/org/apache/spark/internal/io/HadoopMapReduceCommitProtocol.scala
 ##########
 @@ -160,11 +160,15 @@ class HadoopMapReduceCommitProtocol(
 
     val taskAttemptContext = new TaskAttemptContextImpl(jobContext.getConfiguration, taskAttemptId)
     committer = setupCommitter(taskAttemptContext)
-    committer.setupJob(jobContext)
+    if (!dynamicPartitionOverwrite) {
 
 Review comment:
   We need to add a comment to explain this. It looks to me that the Hadoop output committer doesn't support concurrent writing to the same directory by design, so there is nothing we can do on the Spark side.
   
   The fix here is to avoid using the Hadoop output committer when `dynamicPartitionOverwrite=true`. I'm fine with this fix.
   
   BTW, when writing a partitioned table with `dynamicPartitionOverwrite=false`, can we support concurrent writing as well?
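   To make the guard in the diff concrete, here is a minimal sketch. `GuardedCommitProtocol` is a hypothetical stand-in, not the actual patch; only the `if (!dynamicPartitionOverwrite)` guard mirrors the change above.

```scala
import org.apache.hadoop.mapreduce.{JobContext, OutputCommitter}

// Hypothetical stand-in for the protocol class under review; the guard
// mirrors the diff above, the surrounding structure is illustrative only.
class GuardedCommitProtocol(
    committer: OutputCommitter,
    dynamicPartitionOverwrite: Boolean) {

  def setupJob(jobContext: JobContext): Unit = {
    if (!dynamicPartitionOverwrite) {
      // Only let the Hadoop OutputCommitter set up its job state (e.g. the
      // shared `_temporary` directory) when Spark is not staging the output
      // itself. Skipping it in the dynamic-overwrite path keeps a new
      // application from committing leftover task files of a killed one.
      committer.setupJob(jobContext)
    }
  }
}
```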
