rdblue commented on a change in pull request #3970:
URL: https://github.com/apache/iceberg/pull/3970#discussion_r791313586



##########
File path: spark/v3.2/spark/src/test/java/org/apache/iceberg/spark/TestSparkDistributionAndOrderingUtil.java
##########
@@ -1300,6 +1300,202 @@ public void testRangeCopyOnWriteMergePartitionedSortedTable() {
     checkCopyOnWriteDistributionAndOrdering(table, MERGE, expectedDistribution, expectedOrdering);
   }
 
+  // ===================================================================================
+  // Distribution and ordering for merge-on-read DELETE operations with position deletes
+  // ===================================================================================
+  //
+  // UNPARTITIONED
+  // -------------------------------------------------------------------------
+  // delete mode is NOT SET -> ORDER BY _spec_id, _partition, _file, _pos
+  // delete mode is NONE -> unspecified distribution + LOCALLY ORDER BY _spec_id, _partition, _file, _pos
+  // delete mode is HASH -> unspecified distribution + LOCALLY ORDER BY _spec_id, _partition, _file, _pos

Review comment:
       Actually, no. I think I'd cluster by _spec_id and _partition here, since that minimizes the number of delete files produced. I'd do this both as the default (NONE) and when the mode is HASH.
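
The layout under discussion (cluster by _spec_id and _partition, then locally order by _spec_id, _partition, _file, _pos) can be sketched in plain Java. Note that `PositionDelete` and `sorted` below are hypothetical stand-ins used only to illustrate the sort key; they are not Iceberg APIs:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Sketch only: PositionDelete mimics a position-delete row. Its field names
// mirror Iceberg's metadata columns (_spec_id, _partition, _file, _pos) but
// this record is not part of Iceberg.
public class PositionDeleteOrdering {

  record PositionDelete(int specId, String partition, String file, long pos) {}

  // LOCALLY ORDER BY _spec_id, _partition, _file, _pos: within one task this
  // groups all deletes for the same data file together with ascending
  // positions. Clustering by (_spec_id, _partition) first means each task
  // sees whole partitions, so fewer delete files are written overall.
  static final Comparator<PositionDelete> DELETE_ORDER =
      Comparator.comparingInt(PositionDelete::specId)
          .thenComparing(PositionDelete::partition)
          .thenComparing(PositionDelete::file)
          .thenComparingLong(PositionDelete::pos);

  static List<PositionDelete> sorted(List<PositionDelete> deletes) {
    List<PositionDelete> copy = new ArrayList<>(deletes);
    copy.sort(DELETE_ORDER);
    return copy;
  }

  public static void main(String[] args) {
    List<PositionDelete> deletes = List.of(
        new PositionDelete(0, "p=1", "b.parquet", 3L),
        new PositionDelete(0, "p=0", "a.parquet", 7L),
        new PositionDelete(0, "p=0", "a.parquet", 2L),
        new PositionDelete(1, "p=0", "c.parquet", 1L));

    // After sorting, deletes for the same (spec, partition, file) are adjacent.
    sorted(deletes).forEach(System.out::println);
  }
}
```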




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


