codope commented on a change in pull request #5027:
URL: https://github.com/apache/hudi/pull/5027#discussion_r826089521



##########
File path: 
hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/client/clustering/run/strategy/SparkSortAndSizeExecutionStrategy.java
##########
@@ -58,12 +57,14 @@ public SparkSortAndSizeExecutionStrategy(HoodieTable table,
                                            final String instantTime, final Map<String, String> strategyParams, final Schema schema,
                                            final List<HoodieFileGroupId> fileGroupIdList, final boolean preserveHoodieMetadata) {
     LOG.info("Starting clustering for a group, parallelism:" + numOutputGroups + " commit:" + instantTime);
-    Properties props = getWriteConfig().getProps();
-    props.put(HoodieWriteConfig.BULKINSERT_PARALLELISM_VALUE.key(), String.valueOf(numOutputGroups));
+
     // We are calling another action executor - disable auto commit. Strategy is only expected to write data in new files.
-    props.put(HoodieWriteConfig.AUTO_COMMIT_ENABLE.key(), Boolean.FALSE.toString());
-    props.put(HoodieStorageConfig.PARQUET_MAX_FILE_SIZE.key(), String.valueOf(getWriteConfig().getClusteringTargetFileMaxBytes()));
-    HoodieWriteConfig newConfig = HoodieWriteConfig.newBuilder().withProps(props).build();
+    getWriteConfig().setValue(HoodieWriteConfig.AUTO_COMMIT_ENABLE, Boolean.FALSE.toString());

Review comment:
       Isn't it possible this could cause the same issue, since `getWriteConfig()` returns the same shared object? Maybe fix it in the superclass as well?
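
To make the concern concrete, here is a minimal self-contained sketch (hypothetical class and key names, not the actual Hudi API) of why calling `setValue` on the shared write config leaks the change to every other holder of that object, while a defensive copy of the underlying `Properties` keeps the override local:

```java
import java.util.Properties;

// Hypothetical sketch -- names are illustrative, not the real Hudi classes.
public class Main {

    // Derives a new Properties with one key overridden, leaving the
    // original untouched: the pattern the review comment is suggesting.
    static Properties withOverride(Properties base, String key, String value) {
        Properties copy = new Properties();
        copy.putAll(base);              // defensive copy, not a live reference
        copy.setProperty(key, value);
        return copy;
    }

    public static void main(String[] args) {
        Properties shared = new Properties();
        shared.setProperty("hoodie.auto.commit", "true");

        // In-place mutation (what setValue on the shared config effectively
        // does): every other holder of `shared` now sees the change.
        shared.setProperty("hoodie.auto.commit", "false");
        System.out.println("after mutation: " + shared.getProperty("hoodie.auto.commit"));

        // Defensive copy: the override stays local to the derived config.
        shared.setProperty("hoodie.auto.commit", "true");
        Properties derived = withOverride(shared, "hoodie.auto.commit", "false");
        System.out.println("shared:  " + shared.getProperty("hoodie.auto.commit"));
        System.out.println("derived: " + derived.getProperty("hoodie.auto.commit"));
    }
}
```

With a copy, disabling auto commit for the delegated action executor cannot bleed into later writes that reuse the original config.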

##########
File path: 
hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/client/clustering/run/strategy/SparkSortAndSizeExecutionStrategy.java
##########
@@ -58,12 +57,14 @@ public SparkSortAndSizeExecutionStrategy(HoodieTable table,
                                            final String instantTime, final Map<String, String> strategyParams, final Schema schema,
                                            final List<HoodieFileGroupId> fileGroupIdList, final boolean preserveHoodieMetadata) {
     LOG.info("Starting clustering for a group, parallelism:" + numOutputGroups + " commit:" + instantTime);
-    Properties props = getWriteConfig().getProps();
-    props.put(HoodieWriteConfig.BULKINSERT_PARALLELISM_VALUE.key(), String.valueOf(numOutputGroups));
+
     // We are calling another action executor - disable auto commit. Strategy is only expected to write data in new files.
-    props.put(HoodieWriteConfig.AUTO_COMMIT_ENABLE.key(), Boolean.FALSE.toString());
-    props.put(HoodieStorageConfig.PARQUET_MAX_FILE_SIZE.key(), String.valueOf(getWriteConfig().getClusteringTargetFileMaxBytes()));
-    HoodieWriteConfig newConfig = HoodieWriteConfig.newBuilder().withProps(props).build();
+    getWriteConfig().setValue(HoodieWriteConfig.AUTO_COMMIT_ENABLE, Boolean.FALSE.toString());
+
+    HoodieWriteConfig newConfig = HoodieWriteConfig.newBuilder()
+            .withBulkInsertParallelism(numOutputGroups)
+            .withProps(getWriteConfig().getProps()).build();
+    newConfig.setValue(HoodieStorageConfig.PARQUET_MAX_FILE_SIZE, String.valueOf(getWriteConfig().getClusteringTargetFileMaxBytes()));
     return (JavaRDD<WriteStatus>) SparkBulkInsertHelper.newInstance()
         .bulkInsert(inputRecords, instantTime, getHoodieTable(), newConfig, false, getPartitioner(strategyParams, schema), true, numOutputGroups, new CreateHandleFactory(preserveHoodieMetadata));

Review comment:
       What about `getHoodieTable()`? It also holds a `HoodieWriteConfig`, though I don't see that one being mutated.
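
A minimal sketch of this second concern (again with hypothetical stand-in classes, not the actual Hudi types): even when the caller builds a fresh config and passes it down, any object that was constructed with a reference to the original config still observes later mutations of it:

```java
import java.util.Properties;

// Hypothetical sketch: a Table keeps a reference to the Config it was built
// with, so mutating that Config changes what the Table sees -- even if a
// separate, freshly built config is passed to downstream helpers.
public class Main {

    static class Config {
        final Properties props = new Properties();
        void set(String k, String v) { props.setProperty(k, v); }
        String get(String k) { return props.getProperty(k); }
    }

    static class Table {
        private final Config config;
        Table(Config config) { this.config = config; }   // live reference, no copy
        Config getConfig() { return config; }
    }

    public static void main(String[] args) {
        Config shared = new Config();
        shared.set("parquet.max.file.size", "120MB");
        Table table = new Table(shared);

        // Caller mutates the shared config before delegating the write:
        shared.set("parquet.max.file.size", "100MB");

        // The table's view changed too, because it holds the same object.
        System.out.println("table sees: " + table.getConfig().get("parquet.max.file.size"));
    }
}
```

This is why the review asks about `getHoodieTable()`: passing a new `HoodieWriteConfig` into the bulk-insert helper does not isolate code paths that read configuration through the table's own reference.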




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.