cloud-fan commented on code in PR #52584:
URL: https://github.com/apache/spark/pull/52584#discussion_r2426665205


##########
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileFormatWriter.scala:
##########
@@ -157,6 +153,23 @@ object FileFormatWriter extends Logging {
     val actualOrdering = writeFilesOpt.map(_.child)
       .getOrElse(materializeAdaptiveSparkPlan(plan))
       .outputOrdering
+
+    val requiredOrdering = {

Review Comment:
   OK, this is necessary for the current codebase, but do we really need it in
theory? The planned write should have added the sort already, so ideally we
shouldn't need to try to add the sort again here.
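
   To make the trade-off concrete, here is a minimal, self-contained sketch
(plain Scala, no Spark dependencies) of the decision this block implements:
add a sort only when the plan's output ordering does not already satisfy the
required ordering. SortKey, Scan, Sort and addSortIfNeeded are hypothetical
names used for illustration, not Spark's actual SortOrder/SortExec APIs.

// Simplified, self-contained model of the "add sort only if needed" decision
// discussed above. SortKey, Scan, Sort and addSortIfNeeded are hypothetical
// stand-ins for illustration, not Spark's SortOrder/SortExec APIs.
object SortDecisionSketch {
  case class SortKey(column: String, ascending: Boolean = true)

  sealed trait Plan { def outputOrdering: Seq[SortKey] }
  case class Scan(outputOrdering: Seq[SortKey] = Nil) extends Plan
  case class Sort(keys: Seq[SortKey], child: Plan) extends Plan {
    def outputOrdering: Seq[SortKey] = keys
  }

  // The child's ordering satisfies the requirement if the required keys are a
  // prefix of what the child already produces.
  def satisfies(actual: Seq[SortKey], required: Seq[SortKey]): Boolean =
    actual.startsWith(required)

  // Mirrors the logic under review: wrap the plan in a Sort only when the
  // planned write has not already established the required ordering.
  def addSortIfNeeded(plan: Plan, required: Seq[SortKey]): Plan =
    if (satisfies(plan.outputOrdering, required)) plan else Sort(required, plan)

  def main(args: Array[String]): Unit = {
    val required = Seq(SortKey("partition_col"))
    // Planned write already inserted the sort: the plan is returned unchanged.
    println(addSortIfNeeded(Sort(required, Scan()), required))
    // Unsorted input: a Sort is added here as a fallback.
    println(addSortIfNeeded(Scan(), required))
  }
}

   If planned writes are guaranteed to have inserted the sort upstream, the
satisfaction check would always pass and the fallback branch would be dead
code, which is the point the comment raises.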



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

