ulysses-you commented on a change in pull request #34568:
URL: https://github.com/apache/spark/pull/34568#discussion_r817530940
##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileFormatWriter.scala
##########
@@ -180,20 +148,6 @@ object FileFormatWriter extends Logging {
statsTrackers = statsTrackers
)
- // We should first sort by partition columns, then bucket id, and finally sorting columns.
- val requiredOrdering =
-   partitionColumns ++ writerBucketSpec.map(_.bucketIdExpression) ++ sortColumns
- // the sort order doesn't matter
- val actualOrdering = empty2NullPlan.outputOrdering.map(_.child)
Review comment:
There is an issue here now that we have AQE: the plan is an
`AdaptiveSparkPlanExec`, which has no `outputOrdering`. For a dynamic
partition write, the code will therefore always add an extra sort.
This PR can resolve that issue as well.
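
For context, here is a simplified sketch of the check that goes wrong. It paraphrases the logic around the removed lines above rather than quoting the exact upstream code:

```scala
// Required ordering for the write: partition columns, then bucket id,
// and finally the sort columns (as in the removed lines above).
val requiredOrdering =
  partitionColumns ++ writerBucketSpec.map(_.bucketIdExpression) ++ sortColumns
val actualOrdering = empty2NullPlan.outputOrdering.map(_.child)

// The writer skips adding a sort only when every required expression
// matches the child's reported output ordering.
val orderingMatched =
  if (requiredOrdering.length > actualOrdering.length) {
    false
  } else {
    requiredOrdering.zip(actualOrdering).forall {
      case (requiredOrder, childOutputOrder) =>
        requiredOrder.semanticEquals(childOutputOrder)
    }
  }

// With AQE enabled, `empty2NullPlan` is an `AdaptiveSparkPlanExec`, whose
// `outputOrdering` is Nil, so `orderingMatched` is always false and an
// extra `SortExec` is inserted even when the data is already sorted.
```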