cshuo commented on code in PR #17867:
URL: https://github.com/apache/hudi/pull/17867#discussion_r2693271238


##########
hudi-flink-datasource/hudi-flink/src/main/java/org/apache/hudi/configuration/FlinkOptions.java:
##########
@@ -637,6 +637,16 @@ public class FlinkOptions extends HoodieConfig {
       .noDefaultValue()
      .withDescription("Parallelism of tasks that do bucket assign, default same as the write task parallelism");
 
+  @AdvancedConfig
+  public static final ConfigOption<Integer> BUCKET_ASSIGN_MINIBATCH_SIZE = ConfigOptions
+      .key("write.bucket_assign.minibatch.size")

Review Comment:
   If we don't want `bucket_assign` in the option key, maybe we should put this option in the index options part instead? Like `index.rli.lookup.minibatch.size`.
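   For illustration, the renamed option could be declared next to the other index options, following the ConfigOptions builder style already used in `FlinkOptions` (a sketch only; the key name and placement are suggestions, not a decided API):

   ```java
   // Sketch: hypothetical option name, mirroring the existing builder pattern.
   @AdvancedConfig
   public static final ConfigOption<Integer> RLI_LOOKUP_MINIBATCH_SIZE = ConfigOptions
       .key("index.rli.lookup.minibatch.size")
       .intType()
       .noDefaultValue()
       .withDescription("Mini-batch size for record level index lookup during bucket assignment");
   ```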



##########
hudi-flink-datasource/hudi-flink/src/main/java/org/apache/hudi/sink/partitioner/RecordIndexPartitioner.java:
##########
@@ -0,0 +1,76 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hudi.sink.partitioner;
+
+import org.apache.hudi.client.common.HoodieFlinkEngineContext;
+import org.apache.hudi.common.model.HoodieKey;
+import org.apache.hudi.common.table.HoodieTableMetaClient;
+import org.apache.hudi.configuration.FlinkOptions;
+import org.apache.hudi.metadata.HoodieTableMetadata;
+import org.apache.hudi.metadata.HoodieTableMetadataUtil;
+import org.apache.hudi.metadata.MetadataPartitionType;
+import org.apache.hudi.util.StreamerUtil;
+
+import org.apache.flink.api.common.functions.Partitioner;
+import org.apache.flink.configuration.Configuration;
+
+/**
+ * Record index input partitioner, which is aligned with the mapping of record key
+ * to the file group of record index partition in metadata table. It prevents multiple
+ * index write subtasks from writing the same record index file group, thereby effectively
+ * reducing the number of small files.
+ */
+public class RecordIndexPartitioner implements Partitioner<HoodieKey> {
+  private final Configuration conf;
+  /**
+   * The number of file groups for the record index partition in the metadata table. The number
+   * cannot be calculated while compiling the write pipeline, since the hoodie table may
+   * not be created yet, so the number is lazily calculated during job running.
+   */
+  private int numFileGroupsForRecordIndexPartition = -1;
+
+  public RecordIndexPartitioner(Configuration conf) {
+    this.conf = conf;
+  }
+
+  @Override
+  public int partition(HoodieKey recordKey, int numPartitions) {
+    // initialize numFileGroupsForRecordIndexPartition lazily.

Review Comment:
   Yes, it'll be called when a task emits a record. Do you mean the calculation of `numFileGroupsForRecordIndexPartition` is kind of heavy if executed at the task level?



##########
hudi-flink-datasource/hudi-flink/src/main/java/org/apache/hudi/sink/partitioner/MiniBatchBucketAssignOperator.java:
##########
@@ -0,0 +1,74 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hudi.sink.partitioner;
+
+import org.apache.hudi.client.model.HoodieFlinkInternalRow;
+
+import org.apache.flink.streaming.api.operators.BoundedOneInput;
+import org.apache.flink.streaming.api.operators.ProcessOperator;
+
+/**
+ * An operator that performs mini-batch bucket assignment for incoming records.
+ *
+ * <p>This operator wraps the {@link MinibatchBucketAssignFunction} and handles the assignment
+ * of buckets to records in mini-batches to improve performance when using RLI (Record Level Index)
+ * or other index types. It buffers input records and processes them in batches to reduce the number
+ * of individual index lookups, which can significantly improve performance compared to processing
+ * each record individually.
+ *
+ * @see MinibatchBucketAssignFunction for the underlying bucket assignment logic
+ */
+public class MiniBatchBucketAssignOperator extends ProcessOperator<HoodieFlinkInternalRow, HoodieFlinkInternalRow> implements BoundedOneInput {
+
+  /** The underlying function that performs the actual bucket assignment logic. */
+  private final MinibatchBucketAssignFunction bucketAssignFunction;
+
+  /**
+   * Constructs a MiniBatchBucketAssignOperator with the specified bucket assignment function.
+   *
+   * @param bucketAssignFunction the function responsible for performing the bucket assignment logic
+   */
+  public MiniBatchBucketAssignOperator(MinibatchBucketAssignFunction bucketAssignFunction) {
+    super(bucketAssignFunction);
+    this.bucketAssignFunction = bucketAssignFunction;
+  }
+
+  /**
+   * Prepares for taking a snapshot of the operator state before a barrier arrives.
+   * This method ensures that any buffered records are processed before checkpointing
+   * to maintain consistency in the bucket assignment state.
+   *
+   * @param checkpointId the ID of the checkpoint to be taken
+   */
+  @Override
+  public void prepareSnapshotPreBarrier(long checkpointId) throws Exception {

Review Comment:
   `prepareSnapshotPreBarrier` is a method of `StreamOperator`, which is the base interface of all stream operators. So when other operators need it, they can just override the method.
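   The flush-before-barrier contract the operator relies on can be sketched in plain Java, independent of Flink (hypothetical names; the real operator delegates buffering and flushing to `MinibatchBucketAssignFunction`):

   ```java
   import java.util.ArrayList;
   import java.util.List;

   // Hedged sketch of mini-batch buffering with a pre-barrier flush.
   // MiniBatchBuffer and its methods are illustrative names only.
   public class MiniBatchBuffer<T> {
     private final List<T> buffer = new ArrayList<>();
     private final List<T> emitted = new ArrayList<>();
     private final int batchSize;

     public MiniBatchBuffer(int batchSize) {
       this.batchSize = batchSize;
     }

     public void processElement(T record) {
       buffer.add(record);
       if (buffer.size() >= batchSize) {
         flush(); // one index lookup per batch instead of one per record
       }
     }

     // Mirrors prepareSnapshotPreBarrier: drain the buffer so no record is
     // held back across the checkpoint barrier.
     public void prepareSnapshotPreBarrier(long checkpointId) {
       flush();
     }

     private void flush() {
       emitted.addAll(buffer);
       buffer.clear();
     }

     public List<T> emittedRecords() {
       return emitted;
     }
   }
   ```

   Because the buffer is always empty at the barrier, the operator needs no operator state for in-flight records.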



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
