alexeykudinkin commented on code in PR #7402:
URL: https://github.com/apache/hudi/pull/7402#discussion_r1042582593
##########
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/execution/bulkinsert/BulkInsertSortMode.java:
##########
@@ -22,7 +22,8 @@
* Bulk insert sort mode.
*/
public enum BulkInsertSortMode {
- NONE,
- GLOBAL_SORT,
- PARTITION_SORT
+ NONE,
+ GLOBAL_SORT,
+ PARTITION_SORT,
+ PARTITION_PATH_REDISTRIBUTE
Review Comment:
"Redistribute", while a proper term, isn't common in the Spark glossary, so it
might be confusing to users.
What do you think about `PARTITION_COLUMN_REPARTITION`? It's a bit of a
tautology, but it relates directly to the repartitioning API Spark provides.
##########
hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/execution/bulkinsert/PartitionPathRedistributePartitioner.java:
##########
@@ -0,0 +1,99 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.hudi.execution.bulkinsert;
+
+import org.apache.hudi.common.function.SerializableFunctionUnchecked;
+import org.apache.hudi.common.model.HoodieRecord;
+import org.apache.hudi.common.model.HoodieRecordPayload;
+import org.apache.hudi.table.BulkInsertPartitioner;
+
+import org.apache.spark.Partitioner;
+import org.apache.spark.api.java.JavaRDD;
+
+import java.io.Serializable;
+import java.util.Objects;
+
+import scala.Tuple2;
+
+/**
+ * A built-in partitioner that does the following for input records for bulk insert operation
+ * <p>
+ * - For physically partitioned table, repartition the input records based on the partition path,
+ * limiting the shuffle parallelism to specified `outputSparkPartitions`
+ * <p>
+ * - For physically non-partitioned table, simply does coalesce for the input records with
+ * `outputSparkPartitions`
+ * <p>
+ * Corresponding to the {@code BulkInsertSortMode.PARTITION_PATH_REDISTRIBUTE} mode.
+ *
+ * @param <T> HoodieRecordPayload type
+ */
+public class PartitionPathRedistributePartitioner<T extends HoodieRecordPayload>
+    implements BulkInsertPartitioner<JavaRDD<HoodieRecord<T>>> {
+
+ private final boolean isTablePartitioned;
+
+ public PartitionPathRedistributePartitioner(boolean isTablePartitioned) {
+ this.isTablePartitioned = isTablePartitioned;
+ }
+
+ @Override
+  public JavaRDD<HoodieRecord<T>> repartitionRecords(JavaRDD<HoodieRecord<T>> records,
+                                                     int outputSparkPartitions) {
+    if (isTablePartitioned) {
+      PartitionPathRDDPartitioner partitioner = new PartitionPathRDDPartitioner(
+          (partitionPath) -> (String) partitionPath, outputSparkPartitions);
+      return records.mapToPair(record -> new Tuple2<>(record.getPartitionPath(), record))
Review Comment:
We can avoid this `mapToPair` step by instead providing the partitioner with a
mapper that extracts the partition path directly from the record.
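To make the suggestion concrete, here is a minimal, Spark-free sketch of the idea: the partitioner owns a record-to-partition-path extractor, so the same-partition-path-same-Spark-partition guarantee holds without the caller building an intermediate pair RDD. All names here (`PartitionPathHasher`, `Main`) are illustrative, not the PR's actual `PartitionPathRDDPartitioner` API.

```java
import java.util.Objects;
import java.util.function.Function;

// Hypothetical sketch (names are illustrative, not from the PR): the
// partitioner holds a record -> partition-path extractor, so records sharing
// a Hudi partition path hash to the same Spark partition without an explicit
// mapToPair on the caller's side.
final class PartitionPathHasher<R> {
  private final Function<R, String> partitionPathExtractor;
  private final int numPartitions;

  PartitionPathHasher(Function<R, String> extractor, int numPartitions) {
    this.partitionPathExtractor = extractor;
    this.numPartitions = numPartitions;
  }

  int getPartition(R record) {
    // floorMod keeps negative hash codes inside [0, numPartitions)
    return Math.floorMod(
        Objects.hashCode(partitionPathExtractor.apply(record)), numPartitions);
  }
}

public class Main {
  public static void main(String[] args) {
    // Records are plain strings here; in the PR this would be HoodieRecord
    // with the extractor HoodieRecord::getPartitionPath.
    PartitionPathHasher<String> hasher = new PartitionPathHasher<>(p -> p, 4);
    System.out.println(
        hasher.getPartition("2022/12/01") == hasher.getPartition("2022/12/01"));
  }
}
```

With an extractor like this, the `mapToPair` above collapses into the partitioner itself.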
##########
hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/execution/bulkinsert/PartitionPathRedistributePartitionerWithRows.java:
##########
@@ -0,0 +1,60 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.hudi.execution.bulkinsert;
+
+import org.apache.hudi.common.model.HoodieRecord;
+import org.apache.hudi.table.BulkInsertPartitioner;
+
+import org.apache.spark.sql.Column;
+import org.apache.spark.sql.Dataset;
+import org.apache.spark.sql.Row;
+
+/**
+ * A built-in partitioner that does the following for input rows for bulk insert operation
+ * <p>
+ * - For physically partitioned table, repartition the input rows based on the partition path,
+ * limiting the shuffle parallelism to specified `outputSparkPartitions`
+ * <p>
+ * - For physically non-partitioned table, simply does coalesce for the input rows with
+ * `outputSparkPartitions`
+ * <p>
+ * Corresponding to the {@code BulkInsertSortMode.PARTITION_PATH_REDISTRIBUTE} mode.
+ */
+public class PartitionPathRedistributePartitionerWithRows implements BulkInsertPartitioner<Dataset<Row>> {
+
+ private final boolean isTablePartitioned;
+
+  public PartitionPathRedistributePartitionerWithRows(boolean isTablePartitioned) {
+    this.isTablePartitioned = isTablePartitioned;
+  }
+
+  @Override
+  public Dataset<Row> repartitionRecords(Dataset<Row> rows, int outputSparkPartitions) {
+    if (isTablePartitioned) {
+      return rows.repartition(outputSparkPartitions, new Column(HoodieRecord.PARTITION_PATH_METADATA_FIELD));
Review Comment:
Let's add a check validating that meta-fields are enabled. What about
virtual-keys support?
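A hedged sketch of the fail-fast check this comment asks for. The `populateMetaFields` flag here is an assumption standing in for whatever the write config exposes; the point is that with virtual keys the `_hoodie_partition_path` meta-field is never written, so repartitioning on that column must be rejected up front rather than failing inside Spark.

```java
// Hypothetical guard (the flag's source is assumed, not taken from this PR):
// when meta-fields are disabled (virtual keys), _hoodie_partition_path does
// not exist in the rows, so the repartition column would be missing.
final class MetaFieldsGuard {
  static void validate(boolean populateMetaFields) {
    if (!populateMetaFields) {
      throw new IllegalStateException(
          "PARTITION_PATH_REDISTRIBUTE requires meta-fields: "
              + "_hoodie_partition_path is absent when virtual keys are enabled");
    }
  }
}

public class Main {
  public static void main(String[] args) {
    MetaFieldsGuard.validate(true); // meta-fields enabled: proceeds silently
    try {
      MetaFieldsGuard.validate(false); // virtual keys: must fail fast
    } catch (IllegalStateException e) {
      System.out.println("rejected");
    }
  }
}
```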
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]