zhengchenyu commented on code in PR #37346:
URL: https://github.com/apache/spark/pull/37346#discussion_r2435447191
##########
core/src/main/scala/org/apache/spark/internal/io/HadoopMapReduceCommitProtocol.scala:
##########
@@ -125,22 +126,27 @@ class HadoopMapReduceCommitProtocol(
      taskContext: TaskAttemptContext, dir: Option[String], spec: FileNameSpec): String = {
    val filename = getFilename(taskContext, spec)
-    val stagingDir: Path = committer match {
-      // For FileOutputCommitter it has its own staging path called "work path".
-      case f: FileOutputCommitter =>
-        if (dynamicPartitionOverwrite) {
-          assert(dir.isDefined,
-            "The dataset to be written must be partitioned when dynamicPartitionOverwrite is true.")
-          partitionPaths += dir.get
-        }
-        new Path(Option(f.getWorkPath).map(_.toString).getOrElse(path))
-      case _ => new Path(path)
-    }
+    if (forceUseStagingDir && !dynamicPartitionOverwrite) {
Review Comment:
When `spark.sql.hive.convertMetastoreParquet` or `spark.sql.hive.convertMetastoreOrc` is `false`, the Hive SerDe is used for the write. In that case we also call `newTaskTempFileAbsPath`, and the rename is triggered here. I suspect this conflicts with the Hive SerDe logic; see the sketches below.
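
For reference, these are the flags in question; setting them to `false` disables the Parquet/ORC conversion so that the Hive SerDe writer is used (a minimal repro setup, assuming an existing Hive-enabled `SparkSession` named `spark`):

```scala
// Assumes an existing SparkSession `spark` with Hive support enabled.
// With conversion disabled, Parquet/ORC Hive tables are written through
// the Hive SerDe path instead of Spark's built-in datasource writer.
spark.conf.set("spark.sql.hive.convertMetastoreParquet", "false")
spark.conf.set("spark.sql.hive.convertMetastoreOrc", "false")
```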
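
And a minimal sketch of the absolute-path flow I mean, paraphrased from `HadoopMapReduceCommitProtocol` (the names `absPathStagingDir` and `filesToMove` follow upstream, but the class and details here are simplified, not this PR's exact code): files created via `newTaskTempFileAbsPath` land in a dedicated staging dir and are renamed to their final absolute locations at job commit, which is the rename I worry will collide with the forced staging dir.

```scala
import java.util.UUID
import org.apache.hadoop.fs.{FileSystem, Path}
import scala.collection.mutable

// Simplified sketch, not the real class: shows why newTaskTempFileAbsPath
// implies a rename at commit time.
class AbsPathFlowSketch(path: String, jobId: String) {
  // Staging dir for files whose final destination is an absolute path
  // (e.g. Hive partitions with custom locations).
  private val absPathStagingDir = new Path(path, "_temporary-" + jobId)

  // temp file -> final absolute destination, collected while tasks run.
  private val filesToMove = mutable.Map[String, String]()

  // Roughly what newTaskTempFileAbsPath does: hand the task a temp path
  // under the staging dir and remember where the file must end up.
  def newTaskTempFileAbsPath(absoluteDir: String, filename: String): String = {
    val absOutputPath = new Path(absoluteDir, filename).toString
    val tmpOutputPath =
      new Path(absPathStagingDir, UUID.randomUUID().toString + "-" + filename).toString
    filesToMove(tmpOutputPath) = absOutputPath
    tmpOutputPath
  }

  // Roughly what commitJob does with those files: one rename per file,
  // then the staging dir is cleaned up.
  def commitAbsPathFiles(fs: FileSystem): Unit = {
    for ((src, dst) <- filesToMove) {
      require(fs.rename(new Path(src), new Path(dst)), s"rename $src -> $dst failed")
    }
    fs.delete(absPathStagingDir, true)
  }
}
```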