boneanxs commented on code in PR #1834:
URL: https://github.com/apache/hudi/pull/1834#discussion_r907976925
##########
hudi-spark/src/main/java/org/apache/hudi/DataSourceUtils.java:
##########
@@ -225,35 +220,35 @@ public static void checkRequiredProperties(TypedProperties props, List<String> c
     });
   }
 
-  public static HoodieWriteClient createHoodieClient(JavaSparkContext jssc, String schemaStr, String basePath,
+  public static HoodieWriteConfig createHoodieConfig(String schemaStr, String basePath,
       String tblName, Map<String, String> parameters) {
-    boolean asyncCompact = Boolean.parseBoolean(parameters.get(DataSourceWriteOptions.ASYNC_COMPACT_ENABLE_KEY()));
-    // inline compaction is on by default for MOR
+    boolean asyncCompact = Boolean.parseBoolean(parameters.get(DataSourceWriteOptions.ASYNC_COMPACT_ENABLE_OPT_KEY()));
     boolean inlineCompact = !asyncCompact && parameters.get(DataSourceWriteOptions.TABLE_TYPE_OPT_KEY())
         .equals(DataSourceWriteOptions.MOR_TABLE_TYPE_OPT_VAL());
-    return createHoodieClient(jssc, schemaStr, basePath, tblName, parameters, inlineCompact);
-  }
-
-  public static HoodieWriteClient createHoodieClient(JavaSparkContext jssc, String schemaStr, String basePath,
-      String tblName, Map<String, String> parameters, boolean inlineCompact) {
-
     // insert/bulk-insert combining to be true, if filtering for duplicates
     boolean combineInserts = Boolean.parseBoolean(parameters.get(DataSourceWriteOptions.INSERT_DROP_DUPS_OPT_KEY()));
+    HoodieWriteConfig.Builder builder = HoodieWriteConfig.newBuilder()
+        .withPath(basePath).withAutoCommit(false).combineInput(combineInserts, true);
Review Comment:
Hi @nsivabalan, do you have any idea why we disable autoCommit by default
when creating HoodieWriteConfig?
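
For context, one plausible rationale (a hedged guess, not confirmed in this
thread): with autoCommit off, the caller can inspect the returned write
statuses for errors and only commit the instant when the write succeeded,
instead of having the client commit unconditionally. The sketch below uses
hypothetical stand-in classes (MiniWriteClient is not a real Hudi type) purely
to illustrate that manual-commit pattern:

```java
import java.util.List;

// Hypothetical miniature client illustrating autoCommit=false semantics.
// Not Hudi's actual API; names here are illustrative stand-ins.
final class MiniWriteClient {
    private final boolean autoCommit;
    private boolean committed = false;

    MiniWriteClient(boolean autoCommit) {
        this.autoCommit = autoCommit;
    }

    // Performs a write; with autoCommit the commit happens unconditionally,
    // before the caller has any chance to inspect the write statuses.
    List<String> write(List<String> records) {
        if (autoCommit) {
            committed = true;
        }
        return records; // stand-in for per-record write statuses
    }

    // With autoCommit off, the caller commits only after validating statuses.
    void commit(List<String> statuses) {
        if (statuses.stream().noneMatch(s -> s.startsWith("ERROR"))) {
            committed = true;
        }
    }

    boolean isCommitted() {
        return committed;
    }
}

public class AutoCommitDemo {
    public static void main(String[] args) {
        MiniWriteClient client = new MiniWriteClient(false);
        List<String> statuses = client.write(List.of("ok-1", "ERROR-2"));
        client.commit(statuses); // errors present, so nothing is committed
        System.out.println(client.isCommitted()); // prints false
    }
}
```

Under this reading, the datasource write path would deliberately build the
config with `withAutoCommit(false)` so it can decide commit vs. rollback after
examining the results.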
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]