fvaleye commented on code in PR #1620:
URL: https://github.com/apache/iceberg-rust/pull/1620#discussion_r2297880427


##########
crates/integrations/datafusion/src/physical_plan/repartition.rs:
##########
@@ -0,0 +1,906 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+use std::any::Any;
+use std::sync::Arc;
+
+use datafusion::error::Result as DFResult;
+use datafusion::execution::{SendableRecordBatchStream, TaskContext};
+use datafusion::physical_expr::{EquivalenceProperties, PhysicalExpr};
+use datafusion::physical_plan::execution_plan::{Boundedness, EmissionType};
+use datafusion::physical_plan::expressions::Column;
+use datafusion::physical_plan::repartition::RepartitionExec;
+use datafusion::physical_plan::{
+    DisplayAs, DisplayFormatType, ExecutionPlan, Partitioning, PlanProperties,
+};
+use iceberg::spec::{SchemaRef, TableMetadata, TableMetadataRef, Transform};
+
+/// Iceberg-specific repartition execution plan that optimizes data distribution
+/// for parallel processing while respecting Iceberg table partitioning semantics.
+///
+/// This execution plan automatically determines the optimal partitioning strategy
+/// based on the table's partition specification and the configured write
+/// distribution mode:
+///
+/// ## Partitioning Strategies
+///
+/// - **Unpartitioned tables**: Uses round-robin distribution to ensure balanced
+///   load across all workers, maximizing parallelism for write operations.
+///
+/// - **Partitioned tables**: Uses hash partitioning on partition columns
+///   (identity transforms) and bucket columns to maintain data co-location.
+///   This ensures:
+///   - Better file clustering within partitions
+///   - Improved query pruning performance
+///   - Optimal join performance on partitioned columns
+///
+/// - **Range-distributed tables**: Approximates range distribution by hashing
+///   on sort order columns, since DataFusion lacks a native range exchange.
+///   Falls back to partition/bucket column hashing when available.
+///
+/// ## Write Distribution Modes
+///
+/// Respects the table's `write.distribution-mode` property:
+/// - `hash` (default): Distributes by partition and bucket columns
+/// - `range`: Distributes by sort order columns
+/// - `none`: Uses round-robin distribution
+///
+/// ## Performance Notes
+///
+/// - Only repartitions when the input partitioning scheme differs from the desired strategy
+/// - Only repartitions when the input partition count differs from the target
+/// - Automatically detects the optimal partition count from DataFusion's `SessionConfig`
+/// - Preserves column order (partitions first, then buckets) for consistent file layout
+#[derive(Debug)]
+pub struct IcebergRepartitionExec {

Review Comment:
   Thank you for your comments!
   
   `RepartitionExec` is a generic operator: it reshuffles rows based on a distribution requirement.
   Iceberg has stricter requirements for how data must be partitioned, bucketed, sorted, and grouped before writing, so we must select the appropriate partitioning strategy ourselves.
   
   We have special requirements before writing:
   1. Use Iceberg table metadata:
   - Partition specifications (identity transforms, bucket transforms)
   - Sort orders
   - Write distribution mode (hash, range, none)
   2. Select the appropriate partitioning strategy (a sketch follows this list):
   - Hash partitioning on partition/bucket columns for partitioned tables
   - Round-robin for unpartitioned tables
   - Range approximation using sort order columns
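   
   A minimal sketch of that selection, assuming `TableMetadata` accessors such as `properties()` and `default_partition_spec()`; the enum and helper names are illustrative, not this PR's exact API:
   
   ```rust
   use iceberg::spec::TableMetadata;
   
   // Illustrative only: maps table metadata to a distribution strategy.
   enum DistributionStrategy {
       RoundRobin,
       HashOnPartitionColumns,
       RangeApproximation,
   }
   
   fn select_strategy(metadata: &TableMetadata) -> DistributionStrategy {
       // `write.distribution-mode` defaults to `hash` when unset.
       let mode = metadata
           .properties()
           .get("write.distribution-mode")
           .map(String::as_str)
           .unwrap_or("hash");
       let partitioned = !metadata.default_partition_spec().fields().is_empty();
       match (mode, partitioned) {
           // `none` and unpartitioned tables both fall back to round-robin.
           ("none", _) | (_, false) => DistributionStrategy::RoundRobin,
           // Approximate range distribution via sort order columns.
           ("range", true) => DistributionStrategy::RangeApproximation,
           // Default: hash on partition/bucket columns.
           _ => DistributionStrategy::HashOnPartitionColumns,
       }
   }
   ```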
   
   There are further requirements to preserve the "partition → bucket → range" ordering semantics required by Iceberg:
   - Partition columns must be respected in the physical file layout
   - Bucketing/range partitioning needs to be reproducible and consistent (see the bucket-transform sketch after this list)
   - File grouping must align with Iceberg metadata expectations
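   
   On the reproducibility point: the Iceberg spec defines `bucket(N, v)` via a Murmur3 hash, so every writer derives the same bucket for the same value. A minimal sketch, assuming the `murmur3` crate (the helper `bucket_i64` is hypothetical; iceberg-rust ships its own transform implementation):
   
   ```rust
   use std::io::Cursor;
   
   fn bucket_i64(value: i64, num_buckets: u32) -> u32 {
       // Per the Iceberg spec: bucket(N, v) = (murmur3_x86_32(v) & Integer.MAX_VALUE) % N,
       // hashing long values as their 8-byte little-endian encoding with seed 0.
       // Because the hash is fully deterministic, every writer computes the
       // same bucket for the same value.
       let bytes = value.to_le_bytes();
       let hash = murmur3::murmur3_32(&mut Cursor::new(&bytes[..]), 0).unwrap();
       (((hash as i32) & i32::MAX) as u32) % num_buckets
   }
   ```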
   
   Repartitioning is a plan-level operator, not an expression:
   - `PhysicalExpr` can help compute the partition/bucket key for a row.
   - Reshuffling rows into partitions is still an execution node (`ExecutionPlan`).
   - If we only extend `PhysicalExpr`, we'll have an expression that can calculate partition/bucket values, but we still need an `ExecutionPlan` node to do the actual shuffle/repartitioning.
   
   So, in a nutshell: we need an Iceberg-aware strategy (`IcebergRepartitionExec`) to determine the best partitioning, we delegate the actual shuffle to DataFusion (calling `RepartitionExec` with our selection), and we use `PhysicalExpr` to compute the keys:
   
   ```
   IcebergRepartitionExec (strategy selection, Iceberg-aware)
       ↳ chooses partitioning (hash/round-robin/range)
       ↳ uses Iceberg metadata (partition spec, sort order, mode)
       ↓
   DataFusion RepartitionExec (generic shuffle operator)
       ↳ actually reshuffles rows into partitions
       ↓
   PhysicalExpr (partition/bucket key computation)
       ↳ hash/range/bucket expressions evaluated per row
   ```
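   
   To make the layering concrete, here is a sketch of the hand-off to DataFusion using its public `RepartitionExec::try_new` and `Partitioning` APIs; the helper and its signature are illustrative, not the PR's exact code:
   
   ```rust
   use std::sync::Arc;
   
   use datafusion::error::Result as DFResult;
   use datafusion::physical_expr::PhysicalExpr;
   use datafusion::physical_plan::repartition::RepartitionExec;
   use datafusion::physical_plan::{ExecutionPlan, Partitioning};
   
   // Illustrative helper: wrap the input in a generic RepartitionExec using
   // the Iceberg-aware choice of partitioning.
   fn plan_repartition(
       input: Arc<dyn ExecutionPlan>,
       key_exprs: Vec<Arc<dyn PhysicalExpr>>, // partition/bucket key expressions
       target_partitions: usize,
   ) -> DFResult<Arc<dyn ExecutionPlan>> {
       let partitioning = if key_exprs.is_empty() {
           // Unpartitioned table: balance load with round-robin batches.
           Partitioning::RoundRobinBatch(target_partitions)
       } else {
           // Partitioned table: hash rows on the computed key expressions.
           Partitioning::Hash(key_exprs, target_partitions)
       };
       Ok(Arc::new(RepartitionExec::try_new(input, partitioning)?))
   }
   ```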
   
   Of course, if we decide to rely 100% on DataFusion, we need to consider:
   - `RepartitionExec` implements generic distributions without understanding Iceberg's specifics (buckets, partitions, range vs. sort)
   - Iceberg requires that bucketing and range partitioning be reproducible and consistent across writers
   - Iceberg expects hierarchical ordering: partition → bucket → range
   - Risk of data inconsistency: results may not be reproducible across writers
   - If Iceberg semantics aren't enforced at write time, we will need extra cleanup/repair jobs later (e.g., repartitioning files offline or rewriting manifests for metadata) or a custom implementation
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
