danny0405 commented on code in PR #13261:
URL: https://github.com/apache/hudi/pull/13261#discussion_r2094680531


##########
hudi-spark-datasource/hudi-spark-common/src/main/java/org/apache/hudi/commit/DatasetBucketRescaleCommitActionExecutor.java:
##########
@@ -41,17 +41,17 @@ public class DatasetBucketRescaleCommitActionExecutor extends DatasetBulkInsertO
   private static final long serialVersionUID = 1L;
 
  private static final Logger LOG = LoggerFactory.getLogger(DatasetBucketRescaleCommitActionExecutor.class);
-  private final PartitionBucketIndexHashingConfig hashingConfig;
+  private final String expression;
+  private final String rule;
+  private final int bucketNumber;
+  private PartitionBucketIndexHashingConfig hashingConfig;
 
   public DatasetBucketRescaleCommitActionExecutor(HoodieWriteConfig config,
-                                                  SparkRDDWriteClient writeClient,
-                                                  String instantTime) {
-    super(config, writeClient, instantTime);
-    String expression = config.getBucketIndexPartitionExpression();
-    String rule = config.getBucketIndexPartitionRuleType();
-    int bucketNumber = config.getBucketIndexNumBuckets();
-    this.hashingConfig = new PartitionBucketIndexHashingConfig(expression,
-        bucketNumber, rule, PartitionBucketIndexHashingConfig.CURRENT_VERSION, instantTime);
+                                                  SparkRDDWriteClient writeClient) {
+    super(config, writeClient);
+    expression = config.getBucketIndexPartitionExpression();
+    rule = config.getBucketIndexPartitionRuleType();
+    bucketNumber = config.getBucketIndexNumBuckets();

Review Comment:
   Why must `hashingConfig` be initialized in `preExecute`?
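
   For context, the pattern the diff moves toward (and the reviewer is questioning) is deferring construction of a field until the instant time becomes available, instead of building it eagerly in the constructor. A minimal sketch of that deferred-initialization shape, using hypothetical stand-in classes rather than the real Hudi types (`HashingConfig`, `RescaleExecutor`, and the `preExecute(String)` hook here are all illustrative assumptions, not Hudi APIs):

   ```java
   // Stand-in for PartitionBucketIndexHashingConfig: needs the instant time to be built.
   class HashingConfig {
     final String expression;
     final String instantTime;

     HashingConfig(String expression, String instantTime) {
       this.expression = expression;
       this.instantTime = instantTime;
     }
   }

   // Stand-in for the executor: config-derived values are final, but the
   // hashing config cannot be final because the instant time arrives later.
   class RescaleExecutor {
     private final String expression;      // known at construction time
     private HashingConfig hashingConfig;  // deferred until preExecute

     RescaleExecutor(String expression) {
       this.expression = expression;
       // Instant time is not yet resolved here, so hashingConfig stays null.
     }

     // Analogous to a preExecute hook: called once the instant time exists.
     void preExecute(String instantTime) {
       this.hashingConfig = new HashingConfig(expression, instantTime);
     }

     HashingConfig getHashingConfig() {
       return hashingConfig;
     }
   }

   public class Main {
     public static void main(String[] args) {
       RescaleExecutor exec = new RescaleExecutor("partition=*");
       exec.preExecute("20240101000000");
       System.out.println(exec.getHashingConfig().instantTime);
     }
   }
   ```

   The trade-off the review question touches on: eager construction keeps the field `final` and the object fully initialized after the constructor, while deferral is only forced if the instant time genuinely cannot be known at construction time.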


