zhangyue19921010 commented on code in PR #13017:
URL: https://github.com/apache/hudi/pull/13017#discussion_r2014114562


##########
hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/execution/bulkinsert/RDDSimpleBucketBulkInsertPartitioner.java:
##########
@@ -75,11 +81,11 @@ Map<String, Map<Integer, String>> getPartitionMapper(JavaRDD<HoodieRecord<T>> re
                                                        Map<String, Integer> fileIdPrefixToBucketIndex) {
 
     HoodieSimpleBucketIndex index = (HoodieSimpleBucketIndex) table.getIndex();

Review Comment:
   The type cast here still makes sense.
   
   Currently, bucket indexes fall into two categories: the Simple Bucket Index and the Consistent Hashing Index. In my opinion, the Partition-Level Bucket Index essentially splits the traditional table-level bucket allocation down to partition-level granularity, while keeping all other computational logic "Simple". It therefore largely reuses the capabilities of the Simple Bucket Index, which is why the type cast here still makes sense.
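   To illustrate the point, here is a minimal sketch (class names and methods are simplified stand-ins, not the actual Hudi hierarchy): if the partition-level index is-a simple bucket index, the downcast stays valid while only the per-partition bucket count lookup changes.

   ```java
   import java.util.Map;

   // Hypothetical simplification of HoodieSimpleBucketIndex: one table-level bucket count.
   class SimpleBucketIndexSketch {
     protected final int defaultNumBuckets;

     SimpleBucketIndexSketch(int defaultNumBuckets) {
       this.defaultNumBuckets = defaultNumBuckets;
     }

     int getNumBuckets(String partition) {
       return defaultNumBuckets; // same count for every partition
     }
   }

   // Hypothetical partition-level variant: overrides only the bucket-count lookup,
   // reusing everything else from the "Simple" parent.
   class PartitionLevelBucketIndexSketch extends SimpleBucketIndexSketch {
     private final Map<String, Integer> perPartitionBuckets;

     PartitionLevelBucketIndexSketch(int defaultNumBuckets, Map<String, Integer> perPartitionBuckets) {
       super(defaultNumBuckets);
       this.perPartitionBuckets = perPartitionBuckets;
     }

     @Override
     int getNumBuckets(String partition) {
       return perPartitionBuckets.getOrDefault(partition, defaultNumBuckets);
     }
   }

   public class CastSketch {
     public static void main(String[] args) {
       SimpleBucketIndexSketch index =
           new PartitionLevelBucketIndexSketch(4, Map.of("2024-01-01", 8));
       // The cast to the simple type succeeds because the partition-level
       // index is a subtype; callers keep using the simple-index contract.
       SimpleBucketIndexSketch simple = (SimpleBucketIndexSketch) index;
       System.out.println(simple.getNumBuckets("2024-01-01")); // 8
       System.out.println(simple.getNumBuckets("other"));      // 4
     }
   }
   ```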



##########
hudi-client/hudi-spark-client/src/test/java/org/apache/hudi/index/TestHoodieIndexConfigs.java:
##########
@@ -26,7 +26,7 @@
 import org.apache.hudi.index.HoodieIndex.IndexType;
 import org.apache.hudi.index.bloom.HoodieBloomIndex;
 import org.apache.hudi.index.bloom.HoodieGlobalBloomIndex;
-import org.apache.hudi.index.bucket.HoodieSimpleBucketIndex;
+import org.apache.hudi.index.bucket.partition.HoodieSimpleBucketIndex;
 import org.apache.hudi.index.bucket.HoodieSparkConsistentBucketIndex;

Review Comment:
   done



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]