rdblue commented on a change in pull request #3970:
URL: https://github.com/apache/iceberg/pull/3970#discussion_r791312059



##########
File path: spark/v3.2/spark/src/test/java/org/apache/iceberg/spark/TestSparkDistributionAndOrderingUtil.java
##########
@@ -1300,6 +1300,202 @@ public void testRangeCopyOnWriteMergePartitionedSortedTable() {
     checkCopyOnWriteDistributionAndOrdering(table, MERGE, expectedDistribution, expectedOrdering);
   }
 
+  // ===================================================================================
+  // Distribution and ordering for merge-on-read DELETE operations with position deletes
+  // ===================================================================================
+  //
+  // UNPARTITIONED
+  // -------------------------------------------------------------------------
+  // delete mode is NOT SET -> ORDER BY _spec_id, _partition, _file, _pos
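For context, the default documented above can be overridden per table. A minimal sketch, assuming the `write.delete.distribution-mode` table property that follows Iceberg's `write.distribution-mode` naming convention (the catalog and identifier arguments are placeholders):

```java
import org.apache.iceberg.Table;
import org.apache.iceberg.catalog.Catalog;
import org.apache.iceberg.catalog.TableIdentifier;

class DeleteDistributionConfig {
  // Switch a table's merge-on-read DELETE distribution to hash so that
  // deletes aimed at the same data file are clustered into the same task.
  static void useHashDeleteDistribution(Catalog catalog, TableIdentifier ident) {
    Table table = catalog.loadTable(ident);
    table.updateProperties()
        .set("write.delete.distribution-mode", "hash") // "none" / "hash" / "range"
        .commit();
  }
}
```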

Review comment:
       Why default to RANGE here? Is this what we recommend? I'd probably default to HASH because there will be few files. RANGE could take a single file's worth of deletes and accidentally spread them across 200 delete files.
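To make the concern concrete, here is a toy model in plain Java (not Iceberg code; the file path, delete count, and task-assignment formulas are illustrative) of how position deletes for a single data file map to Spark tasks under hash versus range distribution:

```java
import java.util.HashSet;
import java.util.Set;

// Toy model of task assignment for position deletes that all target ONE data file.
public class DeleteClusteringSketch {
  public static void main(String[] args) {
    int numTasks = 200;                      // Spark's default shuffle parallelism
    String file = "s3://bucket/data/f1.parquet";
    int deleteCount = 10_000;                // deletes at positions 0..9999, one file

    // HASH on _file: the file path alone picks the task, so every delete for
    // this file lands in the same task -> at most one delete file per data file.
    Set<Integer> hashTasks = new HashSet<>();
    for (long pos = 0; pos < deleteCount; pos++) {
      hashTasks.add(Math.floorMod(file.hashCode(), numTasks));
    }

    // RANGE on (_file, _pos): range boundaries slice the position space, so one
    // file's deletes can spread across all tasks, each writing a tiny delete file.
    Set<Integer> rangeTasks = new HashSet<>();
    for (long pos = 0; pos < deleteCount; pos++) {
      rangeTasks.add((int) (pos * numTasks / deleteCount));
    }

    System.out.println("hash tasks touched:  " + hashTasks.size());  // 1
    System.out.println("range tasks touched: " + rangeTasks.size()); // 200
  }
}
```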



