aokolnychyi commented on a change in pull request #3970:
URL: https://github.com/apache/iceberg/pull/3970#discussion_r791307279



##########
File path: spark/v3.2/spark/src/main/java/org/apache/iceberg/spark/SparkWriteConf.java
##########
@@ -244,14 +244,8 @@ public DistributionMode positionDeleteDistributionMode() {
         return deleteMode;
       }
     } else {
-      // use hash distribution if write distribution is range or hash and table is partitioned
-      // avoid range-based shuffles unless the user asks explicitly
-      DistributionMode writeMode = distributionMode();
-      if (writeMode != NONE && table.spec().isPartitioned()) {
-        return HASH;
-      } else {
-        return writeMode;
-      }
+      // avoid range-based shuffles for partitioned tables
+      return table.spec().isPartitioned() ? HASH : RANGE;

Review comment:
       My current thinking is that we should probably request some distribution 
by default. If the table is partitioned, we should cluster by spec ID and 
partition to avoid the range-based shuffle. In all other cases, it is probably 
better to ask for an ordered distribution by spec ID, partition, file, and 
position to reduce the number of delete files we produce. Having a lot of 
delete files will probably cost us more than doing some extra work during the 
DELETE operation.
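
The decision logic described above can be sketched as follows. This is a minimal, standalone illustration of the policy, not the actual Iceberg implementation: `DistributionMode` here is a simplified stand-in for Iceberg's enum, and `positionDeleteDistributionMode(boolean)` is a hypothetical helper that takes whether the table spec is partitioned.

```java
// Hypothetical, simplified stand-in for Iceberg's DistributionMode enum.
enum DistributionMode { NONE, HASH, RANGE }

public class DeleteDistributionSketch {

  // Sketch of the proposed default for position deletes:
  // - partitioned tables cluster by spec ID and partition (hash) to
  //   avoid a range-based shuffle;
  // - unpartitioned tables request an ordered (range) distribution by
  //   spec ID, partition, file, and position to reduce the number of
  //   delete files produced.
  static DistributionMode positionDeleteDistributionMode(boolean isPartitioned) {
    return isPartitioned ? DistributionMode.HASH : DistributionMode.RANGE;
  }

  public static void main(String[] args) {
    System.out.println(positionDeleteDistributionMode(true));
    System.out.println(positionDeleteDistributionMode(false));
  }
}
```

In the real `SparkWriteConf`, the user can still override this default explicitly; the sketch only captures the fallback behavior the reviewer is suggesting.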




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


