danny0405 commented on a change in pull request #3741:
URL: https://github.com/apache/hudi/pull/3741#discussion_r732440053
##########
File path:
hudi-client/hudi-flink-client/src/main/java/org/apache/hudi/table/HoodieFlinkMergeOnReadTable.java
##########
@@ -97,15 +98,20 @@
       HoodieEngineContext context,
       String instantTime,
       Option<Map<String, String>> extraMetadata) {
-    BaseScheduleCompactionActionExecutor scheduleCompactionExecutor = new FlinkScheduleCompactionActionExecutor(
-        context, config, this, instantTime, extraMetadata);
+    ScheduleCompactionActionExecutor scheduleCompactionExecutor = new ScheduleCompactionActionExecutor(
+        context, config, this, instantTime, extraMetadata,
+        new HoodieFlinkMergeOnReadTableCompactor());
     return scheduleCompactionExecutor.execute();
   }

   @Override
-  public HoodieWriteMetadata<List<WriteStatus>> compact(HoodieEngineContext context, String compactionInstantTime) {
-    throw new HoodieNotSupportedException("Compaction is supported as a separate pipeline, "
-        + "should not invoke directly through HoodieFlinkMergeOnReadTable");
+  public HoodieWriteMetadata<List<WriteStatus>> compact(
+      HoodieEngineContext context, String compactionInstantTime, AbstractHoodieWriteClient writeClient) {
+    RunCompactionActionExecutor compactionExecutor = new RunCompactionActionExecutor(
Review comment:
No, Flink executes data compaction as a separate pipeline; this method is only
used for metadata table compaction.
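The diff above replaces an engine-specific executor subclass (FlinkScheduleCompactionActionExecutor) with a single shared executor that receives the engine-specific compactor as a constructor argument. A minimal self-contained sketch of that design change, using simplified stand-in types (Compactor, FlinkCompactor, and the executor below are illustrative, not the actual Hudi classes):

```java
// Sketch of the refactor: instead of one executor subclass per engine,
// one executor delegates to an injected, engine-specific compactor.
// All names here are hypothetical stand-ins for the Hudi classes.

interface Compactor {
    // Engine-specific compaction logic lives behind this interface.
    String compact(String instantTime);
}

class FlinkCompactor implements Compactor {
    @Override
    public String compact(String instantTime) {
        return "flink-compacted@" + instantTime;
    }
}

class ScheduleCompactionActionExecutor {
    private final String instantTime;
    private final Compactor compactor;

    ScheduleCompactionActionExecutor(String instantTime, Compactor compactor) {
        this.instantTime = instantTime;
        this.compactor = compactor;
    }

    String execute() {
        // Delegation replaces subclass overrides: the executor stays
        // engine-agnostic and the compactor carries the engine behavior.
        return compactor.compact(instantTime);
    }
}

public class Main {
    public static void main(String[] args) {
        ScheduleCompactionActionExecutor executor =
            new ScheduleCompactionActionExecutor("001", new FlinkCompactor());
        System.out.println(executor.execute());
    }
}
```

With this shape, adding a new engine only requires a new Compactor implementation, not a parallel executor class hierarchy.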
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]