YuweiXiao commented on code in PR #6737:
URL: https://github.com/apache/hudi/pull/6737#discussion_r990981404
##########
hudi-flink-datasource/hudi-flink/src/main/java/org/apache/hudi/sink/utils/Pipelines.java:
##########
@@ -316,9 +318,8 @@ public static DataStream<HoodieRecord> rowDataToHoodieRecord(Configuration conf,
   public static DataStream<Object> hoodieStreamWrite(Configuration conf, DataStream<HoodieRecord> dataStream) {
     if (OptionsResolver.isBucketIndexType(conf)) {
       WriteOperatorFactory<HoodieRecord> operatorFactory = BucketStreamWriteOperator.getFactory(conf);
-      int bucketNum = conf.getInteger(FlinkOptions.BUCKET_INDEX_NUM_BUCKETS);
-      String indexKeyFields = conf.getString(FlinkOptions.INDEX_KEY_FIELD);
-      BucketIndexPartitioner<HoodieKey> partitioner = new BucketIndexPartitioner<>(bucketNum, indexKeyFields);
+      dataStream = addBucketBootstrapIfNecessary(conf, dataStream);
+      Partitioner<HoodieKey> partitioner = BucketIndexPartitioner.instance(conf);
Review Comment:
We use the sub-pipeline to have a singleton that handles concurrent metadata
initialization. Moving the initialization into `StreamWriteFunction` would
require the StreamCoordinator to handle special events (other than the current
`WriteMetadataEvent`).
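
The concurrency concern above can be illustrated outside of Flink: when many parallel write tasks race to bootstrap shared metadata, a singleton guard ensures the initialization runs exactly once. The sketch below is a minimal, hypothetical illustration of that pattern (plain Java, not Hudi's actual bootstrap code; the class and method names are invented):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: several "writer tasks" race to initialize shared
// metadata; a double-checked singleton guard makes the init run only once,
// which is the role the singleton sub-pipeline plays in the PR.
public class MetadataBootstrap {
  private static final AtomicInteger initCount = new AtomicInteger();
  private static volatile boolean initialized = false;
  private static final Object lock = new Object();

  static void ensureInitialized() {
    if (!initialized) {            // fast path: already bootstrapped
      synchronized (lock) {        // singleton guard for the slow path
        if (!initialized) {
          initCount.incrementAndGet(); // stand-in for metadata table creation
          initialized = true;
        }
      }
    }
  }

  static int getInitCount() {
    return initCount.get();
  }

  public static void main(String[] args) throws InterruptedException {
    ExecutorService pool = Executors.newFixedThreadPool(8);
    for (int i = 0; i < 8; i++) {
      pool.submit(MetadataBootstrap::ensureInitialized); // 8 racing "tasks"
    }
    pool.shutdown();
    pool.awaitTermination(5, TimeUnit.SECONDS);
    System.out.println("init calls: " + getInitCount()); // prints 1
  }
}
```

Without the guard, each task would attempt the initialization itself; pushing that responsibility into `StreamWriteFunction` instances is what would force the coordinator to arbitrate via extra event types.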