kbuci commented on code in PR #18305:
URL: https://github.com/apache/hudi/pull/18305#discussion_r2920628288


##########
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/client/BaseHoodieTableServiceClient.java:
##########
@@ -949,13 +976,13 @@ protected void archive(HoodieTable table) {
   }
 
   /**
-   * Get inflight timeline excluding compaction and clustering.
+   * Get inflight timeline excluding compaction, log compaction, and clustering.
    *
    * @param metaClient
    * @return
    */
   private HoodieTimeline getInflightTimelineExcludeCompactionAndClustering(HoodieTableMetaClient metaClient) {
-    HoodieTimeline inflightTimelineExcludingCompaction = metaClient.getCommitsTimeline().filterPendingExcludingCompaction();
+    HoodieTimeline inflightTimelineExcludingCompaction = metaClient.getCommitsTimeline().filterPendingExcludingCompactionAndLogCompaction();

Review Comment:
   Hypothetically, since log compaction is somewhat mutable (if a plan fails after going to inflight, it can only be rolled back completely), we could filter for logcompact inflights, "claim" the heartbeat, and then roll back. But I didn't add that functionality in this diff for now, since:
   - I wasn't sure it is always safe to assume that the caller of this function isn't already holding a table lock (and AFAIK the Hudi table lock isn't "recursive").
   - It would duplicate heartbeat-handling logic between the rollback-failed-writes flow and the logcompact API.
   - Since there don't seem to be any UTs verifying that rollback of failed writes rolls back logcompaction instants, I wasn't sure whether users of OSS Hudi actually rely on the current behavior (clean/rollback of failed writes rolling back any failed logcompact attempts that were never re-executed).



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
