Tonix517 commented on a change in pull request #29000:
URL: https://github.com/apache/spark/pull/29000#discussion_r649580767



##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/SQLHadoopMapReduceCommitProtocol.scala
##########
@@ -55,7 +55,8 @@ class SQLHadoopMapReduceCommitProtocol(
         // The specified output committer is a FileOutputCommitter.
         // So, we will use the FileOutputCommitter-specified constructor.
         val ctor = clazz.getDeclaredConstructor(classOf[Path], classOf[TaskAttemptContext])
-        committer = ctor.newInstance(new Path(path), context)
+        val committerOutputPath = if (dynamicPartitionOverwrite) stagingDir else new Path(path)
+        committer = ctor.newInstance(committerOutputPath, context)

Review comment:
       Hey @WinkerDu - thank you for the PR. One question: when
`dynamicPartitionOverwrite` is on, this code block only executes when `clazz`
is non-null, i.e. when `SQLConf.OUTPUT_COMMITTER_CLASS` is set. That works for
Parquet, since the property is set at
https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala#L97.
But what about other file formats, such as ORC? There seems to be no such
property-setting logic for them, at least in the Spark repo. So is
`dynamicPartitionOverwrite` supposed to work only for Parquet? Am I missing
something here? Thanks.
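
   For context, here is a rough sketch of the surrounding `setupCommitter`
logic as I read it (paraphrased from memory, so the exact code may differ
slightly); the point is that the `FileOutputCommitter` branch the PR changes
is only reached when that property is set:

   ```scala
   // Paraphrased sketch of SQLHadoopMapReduceCommitProtocol.setupCommitter,
   // not a verbatim copy of the Spark source.
   override protected def setupCommitter(context: TaskAttemptContext): OutputCommitter = {
     var committer = super.setupCommitter(context)
     val clazz = context.getConfiguration
       .getClass(SQLConf.OUTPUT_COMMITTER_CLASS.key, null, classOf[OutputCommitter])
     if (clazz != null) { // only entered when OUTPUT_COMMITTER_CLASS is set
       if (classOf[FileOutputCommitter].isAssignableFrom(clazz)) {
         // FileOutputCommitter-style constructor takes (Path, TaskAttemptContext);
         // this is the branch the PR changes to pass stagingDir when
         // dynamicPartitionOverwrite is on.
         val ctor = clazz.getDeclaredConstructor(classOf[Path], classOf[TaskAttemptContext])
         committer = ctor.newInstance(new Path(path), context)
       } else {
         // Plain OutputCommitter: fall back to the no-arg constructor.
         committer = clazz.getDeclaredConstructor().newInstance()
       }
     }
     committer
   }
   ```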
   
   @cloud-fan @Ngone51 @agrawaldevesh
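
   (For reference, the Parquet-side hook I linked above looks roughly like
the following in `ParquetFileFormat.prepareWrite` - again paraphrased, not
verbatim:)

   ```scala
   // Sketch: ParquetFileFormat resolves its committer class and publishes it
   // under SQLConf.OUTPUT_COMMITTER_CLASS, which is why the clazz null-check
   // above passes for Parquet but apparently not for e.g. ORC.
   val committerClass = conf.getClass(
     SQLConf.PARQUET_OUTPUT_COMMITTER_CLASS.key,
     classOf[ParquetOutputCommitter],
     classOf[OutputCommitter])
   conf.setClass(SQLConf.OUTPUT_COMMITTER_CLASS.key, committerClass, classOf[OutputCommitter])
   ```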




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]



---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
