aokolnychyi commented on code in PR #15150:
URL: https://github.com/apache/iceberg/pull/15150#discussion_r2876102547


##########
spark/v4.1/spark/src/main/java/org/apache/iceberg/spark/SparkWriteOptions.java:
##########
@@ -54,6 +54,7 @@ private SparkWriteOptions() {}
   public static final String REWRITTEN_FILE_SCAN_TASK_SET_ID = 
"rewritten-file-scan-task-set-id";
 
   public static final String OUTPUT_SPEC_ID = "output-spec-id";
+  public static final String OUTPUT_SORT_ORDER_ID = "output-sort-order-id";

Review Comment:
   I think what @RussellSpitzer is suggesting will work and will likely 
simplify this PR. I looked through the code that derives the Spark ordering 
from the Iceberg ordering, and it would be very hard to capture the ID of the 
sort order that influenced the final Spark definition. Since that is not an 
option, the only alternative is to compare the Spark order against the table's 
sort orders somewhere after the requirements are built.
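   
   To make the idea concrete, here is a rough sketch of what that post-hoc 
comparison could look like. It uses simplified stand-in types rather than the 
actual Iceberg `SortOrder` and Spark ordering expressions, so treat it as an 
illustration of the matching step, not the real implementation:
   
   ```java
   import java.util.LinkedHashMap;
   import java.util.List;
   import java.util.Map;
   import java.util.OptionalInt;
   
   public class SortOrderMatcher {
     // Hypothetical simplified sort field: column name + direction.
     record SortField(String column, boolean ascending) {}
   
     // Given the table's known sort orders (id -> fields) and the ordering
     // Spark ultimately derived, find the table order it matches, if any.
     static OptionalInt matchOrderId(
         Map<Integer, List<SortField>> tableOrders, List<SortField> sparkOrdering) {
       for (Map.Entry<Integer, List<SortField>> entry : tableOrders.entrySet()) {
         if (entry.getValue().equals(sparkOrdering)) {
           return OptionalInt.of(entry.getKey());
         }
       }
       return OptionalInt.empty();
     }
   
     public static void main(String[] args) {
       Map<Integer, List<SortField>> orders = new LinkedHashMap<>();
       orders.put(1, List.of(new SortField("ts", true)));
       orders.put(2, List.of(new SortField("ts", true), new SortField("id", false)));
   
       List<SortField> derived =
           List.of(new SortField("ts", true), new SortField("id", false));
       System.out.println(matchOrderId(orders, derived)); // OptionalInt[2]
     }
   }
   ```
   
   The real check would of course have to account for how Iceberg transforms 
and null ordering map onto Spark's ordering expressions, which is the part that 
makes capturing the ID up front so hard.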
   
   I would try Russell's idea; this PR is quite involved, and it would be 
great to simplify it.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

