danny0405 commented on a change in pull request #2593:
URL: https://github.com/apache/hudi/pull/2593#discussion_r581689075



##########
File path: hudi-client/hudi-flink-client/src/main/java/org/apache/hudi/client/HoodieFlinkWriteClient.java
##########
@@ -208,12 +210,32 @@ public void bootstrap(Option<Map<String, String>> extraMetadata) {
 
   @Override
   public void commitCompaction(String compactionInstantTime, List<WriteStatus> writeStatuses, Option<Map<String, String>> extraMetadata) throws IOException {
-    throw new HoodieNotSupportedException("Compaction is not supported yet");
+    HoodieFlinkTable<T> table = HoodieFlinkTable.create(config, (HoodieFlinkEngineContext) context);
+    HoodieCommitMetadata metadata = FlinkCompactHelpers.newInstance().createCompactionMetadata(
+        table, compactionInstantTime, writeStatuses, config.getSchema());
+    extraMetadata.ifPresent(m -> m.forEach(metadata::addMetadata));
+    completeCompaction(metadata, writeStatuses, table, compactionInstantTime);
   }
 
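(Editor's note on the diff above: the change replaces the unsupported-operation stub with a real commit path; it builds the compaction commit metadata from the write statuses, folds any caller-supplied extra metadata into it, then finalizes the compaction instant. The `extraMetadata.ifPresent(m -> m.forEach(metadata::addMetadata))` line is a compact merge idiom. A minimal standalone sketch of the same pattern, using `java.util.Optional` in place of Hudi's `Option`; the class name and map keys here are made up for illustration.)

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

class MetadataMergeSketch {
  public static void main(String[] args) {
    // Target map standing in for HoodieCommitMetadata#addMetadata.
    Map<String, String> commitMetadata = new HashMap<>();
    // Caller-supplied extra metadata may be absent; merge it only when present.
    Optional<Map<String, String>> extraMetadata =
        Optional.of(Map.of("schema", "...", "checkpoint", "...")); // hypothetical keys
    extraMetadata.ifPresent(m -> m.forEach(commitMetadata::put));
    System.out.println(commitMetadata);
  }
}
```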

Review comment:
       `StreamWriteOperatorCoordinator.checkpointComplete` is the entry point for scheduling a compaction, but because the scheduling strategy there is pluggable, a scheduling attempt may produce a null compaction plan (e.g. there is nothing to compact). That is why we describe the compaction as async.
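
(Editor's note: to make that flow concrete, here is a minimal sketch of the scheduling entry point described above. All names below are illustrative, not the actual `StreamWriteOperatorCoordinator` API; it only shows why a completed checkpoint may or may not kick off a compaction.)

```java
import java.util.Optional;

// Illustrative only: a simplified stand-in for the coordinator's scheduling path.
class CompactionSchedulingSketch {

  /** Entry point, invoked when a Flink checkpoint completes. */
  void checkpointComplete(long checkpointId) {
    // The pluggable strategy decides whether anything should be compacted.
    // An empty result corresponds to the "null compaction plan" case above:
    // the checkpoint finishes without triggering a compaction.
    scheduleCompaction().ifPresent(this::executeCompactionAsync);
  }

  /** Hypothetical: asks the strategy for a plan; empty means "no compaction this time". */
  Optional<String> scheduleCompaction() {
    return Optional.empty(); // placeholder decision by the pluggable strategy
  }

  /**
   * Hypothetical: the plan is executed asynchronously, off the checkpoint path,
   * which is why the compaction as a whole is described as async.
   */
  void executeCompactionAsync(String compactionInstantTime) {
    // Hand off to a separate executor/operator; results are committed later,
    // e.g. via HoodieFlinkWriteClient#commitCompaction (the method in this diff).
  }
}
```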



