boneanxs commented on a change in pull request #5027:
URL: https://github.com/apache/hudi/pull/5027#discussion_r826520167
##########
File path:
hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/client/clustering/run/strategy/SparkSortAndSizeExecutionStrategy.java
##########
@@ -58,12 +57,14 @@ public SparkSortAndSizeExecutionStrategy(HoodieTable table,
                                            final String instantTime, final Map<String, String> strategyParams, final Schema schema,
                                            final List<HoodieFileGroupId> fileGroupIdList, final boolean preserveHoodieMetadata) {
     LOG.info("Starting clustering for a group, parallelism:" + numOutputGroups + " commit:" + instantTime);
-    Properties props = getWriteConfig().getProps();
-    props.put(HoodieWriteConfig.BULKINSERT_PARALLELISM_VALUE.key(), String.valueOf(numOutputGroups));
+
     // We are calling another action executor - disable auto commit. Strategy is only expected to write data in new files.
-    props.put(HoodieWriteConfig.AUTO_COMMIT_ENABLE.key(), Boolean.FALSE.toString());
-    props.put(HoodieStorageConfig.PARQUET_MAX_FILE_SIZE.key(), String.valueOf(getWriteConfig().getClusteringTargetFileMaxBytes()));
-    HoodieWriteConfig newConfig = HoodieWriteConfig.newBuilder().withProps(props).build();
+    getWriteConfig().setValue(HoodieWriteConfig.AUTO_COMMIT_ENABLE, Boolean.FALSE.toString());
Review comment:
Since TypedProperties is thread-safe now, changing this here should not cause the same issue.
Please correct me if I'm wrong, but I thought the purpose of this PR was to avoid setting the bulk-insert related configs (bulkInsertParallelism, parquet max file size) on the original write config (`getWriteConfig`).
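
The concern above can be sketched with plain `java.util.Properties` (a hypothetical stand-in for Hudi's config objects, not the actual Hudi API): copying the props before applying clustering-specific overrides leaves the original, shared write config untouched, whereas calling a setter on `getWriteConfig()` mutates state that other code paths may still read.

```java
import java.util.Properties;

public class CopyVsMutateSketch {
    public static void main(String[] args) {
        // Stand-in for the shared write config's props (getWriteConfig().getProps()).
        Properties original = new Properties();
        original.setProperty("hoodie.bulkinsert.shuffle.parallelism", "200");

        // Copy first, then override clustering-specific values on the copy only,
        // so code that still reads `original` is unaffected.
        Properties copy = new Properties();
        copy.putAll(original);
        copy.setProperty("hoodie.bulkinsert.shuffle.parallelism", "4");
        copy.setProperty("hoodie.auto.commit", "false");

        // The original props are unchanged; only the copy carries the overrides.
        System.out.println(original.getProperty("hoodie.bulkinsert.shuffle.parallelism")); // 200
        System.out.println(copy.getProperty("hoodie.bulkinsert.shuffle.parallelism"));     // 4
    }
}
```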
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]