beyond1920 commented on code in PR #8072:
URL: https://github.com/apache/hudi/pull/8072#discussion_r1129314940
##########
hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/table/action/commit/SparkInsertOverwritePartitioner.java:
##########
@@ -41,6 +42,25 @@ public SparkInsertOverwritePartitioner(WorkloadProfile profile, HoodieEngineCont
super(profile, context, table, config);
}
+ @Override
+ public BucketInfo getBucketInfo(int bucketNumber) {
+ BucketInfo bucketInfo = super.getBucketInfo(bucketNumber);
Review Comment:
Sorry, I don't understand what you mean here.
How would we remove `SparkBucketIndexInsertOverwritePartitioner`? Most of the
behavior of the consistent hashing partitioner and the bucket index partitioner
differs significantly here.
Could you please explain in more detail, or show me the code?
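For context, a minimal, self-contained sketch of the override pattern the diff above touches. All class and field names here are illustrative stand-ins, not Hudi's actual API: the idea is that an insert-overwrite partitioner rewrites UPDATE buckets (which would append to existing file groups) as INSERT buckets with fresh file ids, since overwrite always writes new file groups.

```java
import java.util.UUID;

enum BucketType { UPDATE, INSERT }

class BucketInfo {
  final BucketType bucketType;
  final String fileIdPrefix;
  final String partitionPath;

  BucketInfo(BucketType bucketType, String fileIdPrefix, String partitionPath) {
    this.bucketType = bucketType;
    this.fileIdPrefix = fileIdPrefix;
    this.partitionPath = partitionPath;
  }
}

class SketchUpsertPartitioner {
  // Base behavior (hypothetical): bucket 0 targets an existing file group,
  // so it is classified as an UPDATE bucket.
  public BucketInfo getBucketInfo(int bucketNumber) {
    return new BucketInfo(BucketType.UPDATE, "existing-file-" + bucketNumber, "2023/03/07");
  }
}

class SketchInsertOverwritePartitioner extends SketchUpsertPartitioner {
  // Insert-overwrite never appends to existing file groups: rewrite any
  // UPDATE bucket as an INSERT bucket with a fresh file id; INSERT buckets
  // pass through unchanged.
  @Override
  public BucketInfo getBucketInfo(int bucketNumber) {
    BucketInfo bucketInfo = super.getBucketInfo(bucketNumber);
    if (bucketInfo.bucketType == BucketType.UPDATE) {
      return new BucketInfo(BucketType.INSERT, UUID.randomUUID().toString(), bucketInfo.partitionPath);
    }
    return bucketInfo;
  }
}

public class Main {
  public static void main(String[] args) {
    BucketInfo b = new SketchInsertOverwritePartitioner().getBucketInfo(0);
    System.out.println(b.bucketType); // INSERT: overwrite allocates a new file group
  }
}
```

Whether the consistent hashing and bucket index variants can share one such override, or genuinely need separate subclasses, is exactly the question raised above.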
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]