rdblue commented on a change in pull request #3970:
URL: https://github.com/apache/iceberg/pull/3970#discussion_r791311579



##########
File path: spark/v3.2/spark/src/test/java/org/apache/iceberg/spark/TestSparkDistributionAndOrderingUtil.java
##########
@@ -1300,6 +1300,202 @@ public void testRangeCopyOnWriteMergePartitionedSortedTable() {
     checkCopyOnWriteDistributionAndOrdering(table, MERGE, expectedDistribution, expectedOrdering);
   }
 
+  // ===================================================================================
+  // Distribution and ordering for merge-on-read DELETE operations with position deletes
+  // ===================================================================================
+  //
+  // UNPARTITIONED

Review comment:
       Agreed. Position deletes require their own sort order by _file and _pos 
that we will always use.
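
       For illustration, a minimal sketch of what such a required ordering could look like, assuming Spark 3.2's DSv2 connector factories (Expressions.column, Expressions.sort); the class and method names are made up for the example, and only the _file and _pos column names come from the discussion above:

           import org.apache.spark.sql.connector.expressions.Expressions;
           import org.apache.spark.sql.connector.expressions.SortDirection;
           import org.apache.spark.sql.connector.expressions.SortOrder;

           class PositionDeleteOrderingSketch {
             // Ordering a position-delete writer would always require: delete rows are
             // grouped by the data file they reference (_file) and sorted by the row
             // position within that file (_pos).
             static SortOrder[] positionDeleteOrdering() {
               return new SortOrder[] {
                   Expressions.sort(Expressions.column("_file"), SortDirection.ASCENDING),
                   Expressions.sort(Expressions.column("_pos"), SortDirection.ASCENDING)
               };
             }
           }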

##########
File path: spark/v3.2/spark/src/test/java/org/apache/iceberg/spark/TestSparkDistributionAndOrderingUtil.java
##########
@@ -1300,6 +1300,202 @@ public void testRangeCopyOnWriteMergePartitionedSortedTable() {
     checkCopyOnWriteDistributionAndOrdering(table, MERGE, expectedDistribution, expectedOrdering);
   }
 
+  // ===================================================================================
+  // Distribution and ordering for merge-on-read DELETE operations with position deletes
+  // ===================================================================================
+  //
+  // UNPARTITIONED
+  // -------------------------------------------------------------------------
+  // delete mode is NOT SET -> ORDER BY _spec_id, _partition, _file, _pos

Review comment:
       This should be fine since there is no partition, but you technically 
don't need _spec_id and _partition.

##########
File path: spark/v3.2/spark/src/test/java/org/apache/iceberg/spark/TestSparkDistributionAndOrderingUtil.java
##########
@@ -1300,6 +1300,202 @@ public void testRangeCopyOnWriteMergePartitionedSortedTable() {
     checkCopyOnWriteDistributionAndOrdering(table, MERGE, expectedDistribution, expectedOrdering);
   }
 
+  // ===================================================================================
+  // Distribution and ordering for merge-on-read DELETE operations with position deletes
+  // ===================================================================================
+  //
+  // UNPARTITIONED
+  // -------------------------------------------------------------------------
+  // delete mode is NOT SET -> ORDER BY _spec_id, _partition, _file, _pos

Review comment:
       Why default to RANGE here? Is this what we recommend? I'd probably default to HASH because there will be few files. RANGE could accidentally take a single file's worth of deletes and spread them across 200 delete files.
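
       If a user wants to override whatever default is chosen, the knob under discussion is the table-level delete distribution mode. A minimal sketch, assuming the write.delete.distribution-mode property name and Iceberg's UpdateProperties API (the helper class is hypothetical); the 200 above is Spark's default spark.sql.shuffle.partitions value:

           import org.apache.iceberg.Table;

           class DeleteDistributionModeSketch {
             // Pin merge-on-read DELETEs to a hash distribution so delete rows are
             // clustered rather than range-partitioned across the default 200 shuffle
             // partitions (and thus up to 200 tiny delete files).
             static void useHashDeleteDistribution(Table table) {
               table.updateProperties()
                   .set("write.delete.distribution-mode", "hash") // property name is an assumption
                   .commit();
             }
           }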

##########
File path: spark/v3.2/spark/src/test/java/org/apache/iceberg/spark/TestSparkDistributionAndOrderingUtil.java
##########
@@ -1300,6 +1300,202 @@ public void testRangeCopyOnWriteMergePartitionedSortedTable() {
     checkCopyOnWriteDistributionAndOrdering(table, MERGE, expectedDistribution, expectedOrdering);
   }
 
+  // ===================================================================================
+  // Distribution and ordering for merge-on-read DELETE operations with position deletes
+  // ===================================================================================
+  //
+  // UNPARTITIONED
+  // -------------------------------------------------------------------------
+  // delete mode is NOT SET -> ORDER BY _spec_id, _partition, _file, _pos

Review comment:
       Oh, I see. Other specs are why we need to include _spec_id and 
_partition. Good call.

##########
File path: spark/v3.2/spark/src/test/java/org/apache/iceberg/spark/TestSparkDistributionAndOrderingUtil.java
##########
@@ -1300,6 +1300,202 @@ public void testRangeCopyOnWriteMergePartitionedSortedTable() {
     checkCopyOnWriteDistributionAndOrdering(table, MERGE, expectedDistribution, expectedOrdering);
   }
 
+  // ===================================================================================
+  // Distribution and ordering for merge-on-read DELETE operations with position deletes
+  // ===================================================================================
+  //
+  // UNPARTITIONED
+  // -------------------------------------------------------------------------
+  // delete mode is NOT SET -> ORDER BY _spec_id, _partition, _file, _pos
+  // delete mode is NONE -> unspecified distribution + LOCALLY ORDER BY _spec_id, _partition, _file, _pos
+  // delete mode is HASH -> unspecified distribution + LOCALLY ORDER BY _spec_id, _partition, _file, _pos

Review comment:
       This looks good to me.

##########
File path: spark/v3.2/spark/src/test/java/org/apache/iceberg/spark/TestSparkDistributionAndOrderingUtil.java
##########
@@ -1300,6 +1300,202 @@ public void testRangeCopyOnWriteMergePartitionedSortedTable() {
     checkCopyOnWriteDistributionAndOrdering(table, MERGE, expectedDistribution, expectedOrdering);
   }
 
+  // ===================================================================================
+  // Distribution and ordering for merge-on-read DELETE operations with position deletes
+  // ===================================================================================
+  //
+  // UNPARTITIONED
+  // -------------------------------------------------------------------------
+  // delete mode is NOT SET -> ORDER BY _spec_id, _partition, _file, _pos
+  // delete mode is NONE -> unspecified distribution + LOCALLY ORDER BY _spec_id, _partition, _file, _pos
+  // delete mode is HASH -> unspecified distribution + LOCALLY ORDER BY _spec_id, _partition, _file, _pos

Review comment:
       Actually, no. I think I'd cluster by _spec_id and _partition here. That 
minimizes the number of files. I'd do this as the default (NONE) and when mode 
is HASH.

##########
File path: spark/v3.2/spark/src/test/java/org/apache/iceberg/spark/TestSparkDistributionAndOrderingUtil.java
##########
@@ -1300,6 +1300,202 @@ public void testRangeCopyOnWriteMergePartitionedSortedTable() {
     checkCopyOnWriteDistributionAndOrdering(table, MERGE, expectedDistribution, expectedOrdering);
   }
 
+  // ===================================================================================
+  // Distribution and ordering for merge-on-read DELETE operations with position deletes
+  // ===================================================================================
+  //
+  // UNPARTITIONED
+  // -------------------------------------------------------------------------
+  // delete mode is NOT SET -> ORDER BY _spec_id, _partition, _file, _pos
+  // delete mode is NONE -> unspecified distribution + LOCALLY ORDER BY _spec_id, _partition, _file, _pos
+  // delete mode is HASH -> unspecified distribution + LOCALLY ORDER BY _spec_id, _partition, _file, _pos
+  // delete mode is RANGE -> ORDER BY _spec_id, _partition, _file, _pos
+  //
+  // PARTITIONED
+  // -------------------------------------------------------------------------
+  // delete mode is NOT SET -> CLUSTER BY _spec_id, _partition + LOCALLY ORDER BY _spec_id, _partition, _file, _pos
+  // delete mode is NONE -> unspecified distribution + LOCALLY ORDER BY _spec_id, _partition, _file, _pos
+  // delete mode is HASH -> CLUSTER BY _spec_id, _partition + LOCALLY ORDER BY _spec_id, _partition, _file, _pos
+  // delete mode is RANGE -> ORDER BY _spec_id, _partition, _file, _pos

Review comment:
       I think that this set should be what we use for both UNPARTITIONED and 
PARTITIONED cases. We are writing deletes for the existing file layout, not the 
current partition spec. 
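
       To make that concrete, here is a minimal sketch (not the PR's actual implementation; class and method names are illustrative) of the shared requirement expressed with Spark 3.2's DSv2 Distributions and Expressions APIs: cluster delete rows by _spec_id and _partition, and sort each task's output by _spec_id, _partition, _file, _pos.

           import org.apache.spark.sql.connector.distributions.Distribution;
           import org.apache.spark.sql.connector.distributions.Distributions;
           import org.apache.spark.sql.connector.expressions.Expression;
           import org.apache.spark.sql.connector.expressions.Expressions;
           import org.apache.spark.sql.connector.expressions.SortDirection;
           import org.apache.spark.sql.connector.expressions.SortOrder;

           class PositionDeleteDefaultsSketch {
             // Cluster delete rows by the spec and partition of the data files they
             // reference, so each task writes against as few partitions as possible.
             static Distribution clusterBySpecAndPartition() {
               return Distributions.clustered(new Expression[] {
                   Expressions.column("_spec_id"),
                   Expressions.column("_partition")
               });
             }

             // Local sort within each task: spec, partition, file, then row position.
             static SortOrder[] localOrdering() {
               return new SortOrder[] {
                   Expressions.sort(Expressions.column("_spec_id"), SortDirection.ASCENDING),
                   Expressions.sort(Expressions.column("_partition"), SortDirection.ASCENDING),
                   Expressions.sort(Expressions.column("_file"), SortDirection.ASCENDING),
                   Expressions.sort(Expressions.column("_pos"), SortDirection.ASCENDING)
               };
             }
           }

       Because these metadata columns exist whether or not the current spec is partitioned, one requirement like this could cover both cases above.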




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
