xuanyuanking commented on a change in pull request #29000:
URL: https://github.com/apache/spark/pull/29000#discussion_r459946297



##########
File path: core/src/main/scala/org/apache/spark/internal/io/HadoopMapReduceCommitProtocol.scala
##########
@@ -106,15 +106,13 @@ class HadoopMapReduceCommitProtocol(
     val filename = getFilename(taskContext, ext)
 
     val stagingDir: Path = committer match {
-      case _ if dynamicPartitionOverwrite =>
-        assert(dir.isDefined,
-          "The dataset to be written must be partitioned when dynamicPartitionOverwrite is true.")
-        partitionPaths += dir.get
-        this.stagingDir
       // For FileOutputCommitter it has its own staging path called "work path".
       case f: FileOutputCommitter =>
+        handleDynamicPartitionOverwrite(dir)

Review comment:
       Since we changed the behavior, please also update the comment: 
https://github.com/apache/spark/pull/29000/files#diff-d97cfb5711116287a7655f32cd5675cbR43

##########
File path: core/src/main/scala/org/apache/spark/internal/io/HadoopMapReduceCommitProtocol.scala
##########
@@ -106,15 +106,13 @@ class HadoopMapReduceCommitProtocol(
     val filename = getFilename(taskContext, ext)
 
     val stagingDir: Path = committer match {
-      case _ if dynamicPartitionOverwrite =>
-        assert(dir.isDefined,
-          "The dataset to be written must be partitioned when dynamicPartitionOverwrite is true.")
-        partitionPaths += dir.get
-        this.stagingDir
       // For FileOutputCommitter it has its own staging path called "work path".
       case f: FileOutputCommitter =>
+        handleDynamicPartitionOverwrite(dir)
         new Path(Option(f.getWorkPath).map(_.toString).getOrElse(path))
-      case _ => new Path(path)
+      case _ =>
+        handleDynamicPartitionOverwrite(dir)

Review comment:
       If both case branches need to call `handleDynamicPartitionOverwrite`, can we call it once outside the `match` instead?
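       A rough sketch of what I mean (illustration only; it assumes the rest of this method stays as in the diff):

```scala
// Sketch: call handleDynamicPartitionOverwrite once before matching on the
// committer, instead of repeating it in every branch.
handleDynamicPartitionOverwrite(dir)

val stagingDir: Path = committer match {
  // For FileOutputCommitter it has its own staging path called "work path".
  case f: FileOutputCommitter =>
    new Path(Option(f.getWorkPath).map(_.toString).getOrElse(path))
  case _ => new Path(path)
}
```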

##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/InsertIntoHadoopFsRelationCommand.scala
##########
@@ -106,9 +106,11 @@ case class InsertIntoHadoopFsRelationCommand(
         fs, catalogTable.get, qualifiedOutputPath, matchingPartitions)
     }
 
+    // For SPARK-27194 unit test, we try to set constant jobId carried by options
+    val jobId = options.getOrElse("test.jobId", java.util.UUID.randomUUID().toString)

Review comment:
       Can we try to reproduce the file collision without adding this extra option?
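       For context on why a pinned jobId matters here (my own sketch, paraphrasing the commit protocol's naming scheme from memory rather than this PR's diff): the output file name embeds the per-job UUID, so a rerun with a fresh UUID never reuses the previous run's file names, and the collision only shows up when the jobId is held constant.

```scala
import java.util.UUID

// Sketch: the commit protocol's file name is roughly "part-<split>-<jobId><ext>",
// so with a fresh UUID per job a retried job cannot collide with leftover files.
def filename(split: Int, jobId: String, ext: String): String =
  f"part-$split%05d-$jobId$ext"

val firstRun  = filename(0, UUID.randomUUID().toString, ".parquet")
val secondRun = filename(0, UUID.randomUUID().toString, ".parquet")
assert(firstRun != secondRun)  // different jobIds => no name collision

val pinned = "test-job-id"
assert(filename(0, pinned, ".parquet") == filename(0, pinned, ".parquet"))
```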



