aokolnychyi commented on a change in pull request #3970:
URL: https://github.com/apache/iceberg/pull/3970#discussion_r791305876



##########
File path: spark/v3.2/spark/src/main/java/org/apache/iceberg/spark/SparkDistributionAndOrderingUtil.java
##########
@@ -163,7 +163,7 @@ private static Distribution positionDeleteDistribution(DistributionMode distribu
         return Distributions.clustered(clustering);
 
       case RANGE:
-        SortOrder[] ordering = new SortOrder[]{SPEC_ID_ORDER, PARTITION_ORDER, FILE_PATH_ORDER};
+        SortOrder[] ordering = new SortOrder[]{SPEC_ID_ORDER, PARTITION_ORDER, FILE_PATH_ORDER, ROW_POSITION_ORDER};

Review comment:
      I think including the row position may allow us to avoid straggler tasks when a particular file contains most of the deletes.
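  To make the skew argument concrete, here is a toy sketch (plain Java, not Iceberg code; file IDs and counts are made up): when the range sort key is the file alone, a hot file cannot be split across tasks, but adding the row position gives the range partitioner per-record keys to place boundaries on.

```java
import java.util.ArrayList;
import java.util.List;

public class RangeKeySketch {
  // Hypothetical workload: file 0 holds 90 of 100 position deletes, file 1 holds 10.
  static List<long[]> deletes() {
    List<long[]> deletes = new ArrayList<>();
    for (long pos = 0; pos < 90; pos++) deletes.add(new long[] {0, pos});
    for (long pos = 0; pos < 10; pos++) deletes.add(new long[] {1, pos});
    return deletes;
  }

  // Sort key = (file): only 2 distinct keys, so a range partitioner must send
  // all 90 deletes for file 0 to a single task (a straggler).
  static long distinctFileKeys() {
    return deletes().stream().map(d -> d[0]).distinct().count();
  }

  // Sort key = (file, pos): every record has a distinct key, so range
  // boundaries can fall inside the hot file and spread it across tasks.
  static long distinctFilePosKeys() {
    return deletes().stream().map(d -> List.of(d[0], d[1])).distinct().count();
  }

  public static void main(String[] args) {
    System.out.println("distinct keys by (file): " + distinctFileKeys());
    System.out.println("distinct keys by (file, pos): " + distinctFilePosKeys());
  }
}
```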

##########
File path: spark/v3.2/spark/src/main/java/org/apache/iceberg/spark/SparkWriteConf.java
##########
@@ -244,14 +244,8 @@ public DistributionMode positionDeleteDistributionMode() {
         return deleteMode;
       }
     } else {
-      // use hash distribution if write distribution is range or hash and table is partitioned
-      // avoid range-based shuffles unless the user asks explicitly
-      DistributionMode writeMode = distributionMode();
-      if (writeMode != NONE && table.spec().isPartitioned()) {
-        return HASH;
-      } else {
-        return writeMode;
-      }
+      // avoid range-based shuffles for partitioned tables
+      return table.spec().isPartitioned() ? HASH : RANGE;

Review comment:
       My current thinking is that we should probably request some distribution 
by default. If the table is partitioned, we should cluster by spec ID and 
partition to avoid the range-based shuffle. In all other cases, it is probably 
better to ask for an ordered distribution by spec ID, partition, file and 
position to reduce the number of produced delete files. Having a lot of delete 
files will probably cost us more than doing some work during the DELETE 
operation.
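  The default I have in mind could be sketched like this (illustrative plain Java, not actual Iceberg code; the returned strings just describe the requested distribution):

```java
public class PositionDeleteDefaults {
  // Proposed default for position deletes when the user sets no explicit mode:
  // - partitioned table: cluster by spec ID and partition, which co-locates
  //   deletes per partition without a range-based shuffle;
  // - otherwise: request a globally ordered distribution so that each task
  //   writes few, well-sorted delete files.
  static String defaultDistribution(boolean isPartitioned) {
    if (isPartitioned) {
      return "CLUSTER BY _spec_id, _partition";
    }
    return "ORDER BY _spec_id, _partition, _file, _pos";
  }

  public static void main(String[] args) {
    System.out.println(defaultDistribution(true));
    System.out.println(defaultDistribution(false));
  }
}
```

  The trade-off is paying some shuffle cost during the DELETE in exchange for fewer delete files to track and merge on read.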

##########
File path: spark/v3.2/spark/src/test/java/org/apache/iceberg/spark/TestSparkDistributionAndOrderingUtil.java
##########
@@ -1300,6 +1300,202 @@ public void testRangeCopyOnWriteMergePartitionedSortedTable() {
     checkCopyOnWriteDistributionAndOrdering(table, MERGE, expectedDistribution, expectedOrdering);
   }
 
+  // ===================================================================================
+  // Distribution and ordering for merge-on-read DELETE operations with position deletes
+  // ===================================================================================
+  //
+  // UNPARTITIONED

Review comment:
       I think it does not matter whether the table is sorted or not for 
position deletes.

##########
File path: spark/v3.2/spark/src/test/java/org/apache/iceberg/spark/TestSparkDistributionAndOrderingUtil.java
##########
@@ -1300,6 +1300,202 @@ public void testRangeCopyOnWriteMergePartitionedSortedTable() {
     checkCopyOnWriteDistributionAndOrdering(table, MERGE, expectedDistribution, expectedOrdering);
   }
 
+  // ===================================================================================
+  // Distribution and ordering for merge-on-read DELETE operations with position deletes
+  // ===================================================================================
+  //
+  // UNPARTITIONED
+  // -------------------------------------------------------------------------
+  // delete mode is NOT SET -> ORDER BY _spec_id, _partition, _file, _pos
+  // delete mode is NONE -> unspecified distribution + LOCALLY ORDER BY _spec_id, _partition, _file, _pos
+  // delete mode is HASH -> unspecified distribution + LOCALLY ORDER BY _spec_id, _partition, _file, _pos

Review comment:
       We may allow `HASH` even if the current spec is unpartitioned but there 
are multiple specs.
   I am not sure, though. Thoughts?
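  The check I am imagining would look roughly like this (illustrative plain Java, not Iceberg code; `allowHash` is a hypothetical helper): even if the current spec is unpartitioned, older specs may be partitioned, so hash-clustering by `(_spec_id, _partition)` could still group deletes meaningfully.

```java
public class MultiSpecHashSketch {
  // Hypothetical rule: HASH distribution makes sense if the current spec is
  // partitioned, or if the table has evolved through multiple specs (older
  // partitioned specs may still have live data files with position deletes).
  static boolean allowHash(boolean currentSpecPartitioned, int specCount) {
    return currentSpecPartitioned || specCount > 1;
  }

  public static void main(String[] args) {
    System.out.println(allowHash(false, 1)); // single unpartitioned spec
    System.out.println(allowHash(false, 3)); // unpartitioned now, but 3 specs
  }
}
```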




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


