rdblue commented on a change in pull request #3461:
URL: https://github.com/apache/iceberg/pull/3461#discussion_r744306606



##########
File path: 
spark/v3.2/spark/src/main/java/org/apache/iceberg/spark/source/SparkWrite.java
##########
@@ -102,10 +106,13 @@
   private final StructType dsSchema;
   private final Map<String, String> extraSnapshotMetadata;
   private final boolean partitionedFanoutEnabled;
+  private final Distribution requiredDistribution;
+  private final SortOrder[] requiredOrdering;
 
   SparkWrite(SparkSession spark, Table table, SparkWriteConf writeConf,
              LogicalWriteInfo writeInfo, String applicationId,
-             Schema writeSchema, StructType dsSchema) {
+             Schema writeSchema, StructType dsSchema,
+             Distribution requiredDistribution, SortOrder[] requiredOrdering) {

Review comment:
       +1
   
   In general, I agree with the choice to make these decisions in the write 
builder and simply pass the results in here. Building the distribution and 
ordering is up to the write builder, and this class just returns the required 
distribution and ordering to Spark.
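   To illustrate the shape of that split, here is a minimal self-contained sketch. The interfaces below (`Distribution`, `SortOrder`, `RequiresDistributionAndOrdering`) are hypothetical stand-ins for the real Spark 3.2 DSv2 types, not the actual API; the point is only that the builder owns the decision logic and the write object just echoes the precomputed results back.

```java
// Hypothetical stand-ins for Spark's Distribution / SortOrder /
// RequiresDistributionAndOrdering, used only to sketch the design:
// the builder decides, the write returns.
public class WriteRequirementSketch {
  interface Distribution {}

  record OrderedDistribution(String[] columns) implements Distribution {}

  record SortOrder(String column, boolean ascending) {}

  // Stand-in for the interface Spark queries for write requirements.
  interface RequiresDistributionAndOrdering {
    Distribution requiredDistribution();
    SortOrder[] requiredOrdering();
  }

  // The write holds already-built requirements and returns them as-is;
  // it contains no decision logic of its own.
  static final class Write implements RequiresDistributionAndOrdering {
    private final Distribution requiredDistribution;
    private final SortOrder[] requiredOrdering;

    Write(Distribution requiredDistribution, SortOrder[] requiredOrdering) {
      this.requiredDistribution = requiredDistribution;
      this.requiredOrdering = requiredOrdering;
    }

    @Override
    public Distribution requiredDistribution() {
      return requiredDistribution;
    }

    @Override
    public SortOrder[] requiredOrdering() {
      return requiredOrdering;
    }
  }

  // The builder owns all the decision logic (in the real code: table
  // properties, write mode, partition spec, etc.).
  static final class WriteBuilder {
    Write build() {
      Distribution dist = new OrderedDistribution(new String[] {"category"});
      SortOrder[] ordering = {
          new SortOrder("category", true),
          new SortOrder("ts", true)
      };
      return new Write(dist, ordering);
    }
  }

  public static void main(String[] args) {
    Write write = new WriteBuilder().build();
    System.out.println(write.requiredOrdering().length);
  }
}
```

   With this split, `Write` stays a dumb carrier of requirements, so Spark can call `requiredDistribution()` / `requiredOrdering()` without re-running any decision logic.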




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
