nsivabalan commented on code in PR #13229:
URL: https://github.com/apache/hudi/pull/13229#discussion_r2076683946
##########
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/table/upgrade/UpgradeDowngradeUtils.java:
##########
@@ -91,7 +92,8 @@ public static void runCompaction(HoodieTable table, HoodieEngineContext context,
     try (BaseHoodieWriteClient writeClient = upgradeDowngradeHelper.getWriteClient(compactionConfig, context)) {
       Option<String> compactionInstantOpt = writeClient.scheduleCompaction(Option.empty());
       if (compactionInstantOpt.isPresent()) {
-        writeClient.compact(compactionInstantOpt.get());
+        HoodieWriteMetadata result = writeClient.compact(compactionInstantOpt.get());
+        writeClient.commitCompaction(compactionInstantOpt.get(), result, Option.empty());
Review Comment:
Here is my thought process.

Inline compaction is only meant for the following use-case: someone ingests into a Hudi table and has enabled inline table services along the way. So, after the commit/delta commit completes, we call compact() and want to complete the compaction right away. We are adding the additional argument just for this purpose.

For any other external callers, I want the only supported flow to be the non-auto-commit flow. Otherwise, having both APIs for compaction might again cause confusion about when to use the auto-commit flow and when to use the non-auto flow.

Let me know what you think.
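To make the distinction concrete, here is a rough sketch of the non-auto-commit flow being discussed. The method names (scheduleCompaction, compact, commitCompaction) are taken from the diff above; the surrounding setup is hypothetical and assumes a configured writeClient is already in scope:

```java
// Non-auto-commit flow (sketch): the caller schedules, compacts, and then
// explicitly commits. Assumes `writeClient` is an already-configured
// BaseHoodieWriteClient obtained elsewhere.
Option<String> compactionInstantOpt = writeClient.scheduleCompaction(Option.empty());
if (compactionInstantOpt.isPresent()) {
  // compact() runs the compaction and returns write metadata,
  // but does not finalize the compaction instant by itself...
  HoodieWriteMetadata result = writeClient.compact(compactionInstantOpt.get());
  // ...so the caller must explicitly commit it.
  writeClient.commitCompaction(compactionInstantOpt.get(), result, Option.empty());
}
```

Under the proposal in this comment, only the inline table-service path would complete the compaction automatically; every external caller would follow the explicit two-step flow above.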
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]