TheR1sing3un commented on code in PR #17779:
URL: https://github.com/apache/hudi/pull/17779#discussion_r2660658767


##########
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/client/timeline/versioning/v2/TimelineArchiverV2.java:
##########
@@ -125,6 +123,10 @@ public int archiveIfRequired(HoodieEngineContext context, boolean acquireLock) t
       } else {
         log.info("No Instants to archive");
       }
+      // run compact and clean if needed even no instants were archived
+      if (!instantsToArchive.isEmpty() || config.isTimelineCompactionForced()) {

Review Comment:
   > Wondering whether we could introduce some lazy compaction strategies or add an upper threshold for the target file size of a single round of compaction.
   
   I agree with this idea.
   The current strategy is relatively simple, and in some corner cases compaction can be blocked, leaving all newly added archived files uncompacted.
   Therefore, I will open two PRs later:
   1. Fix the current corner case that blocks normal compaction.
   2. Introduce more diverse compaction strategies, covering not only trigger timing (lazy/eager) but also candidate selection, such as finding at most **one** batch of candidates at each level **or** compacting each level **as much as** possible.
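   To make the distinction in point 2 concrete, here is a minimal, self-contained sketch of the two candidate-selection behaviors. The class, method names, and batch size are hypothetical illustrations, not actual Hudi APIs: "one batch" returns candidates from only the first eligible level per round, while "as much as possible" collects every full batch at every level in a single round.

   ```java
   import java.util.ArrayList;
   import java.util.List;
   import java.util.Map;
   import java.util.TreeMap;

   public class CompactionStrategySketch {
       // Hypothetical merge factor: how many files form one compaction batch.
       static final int BATCH_SIZE = 3;

       // Lazy-style selection: return at most ONE batch, taken from the
       // lowest level that has enough files, then stop for this round.
       static List<Integer> pickOneBatch(Map<Integer, List<Integer>> levels) {
           for (List<Integer> files : levels.values()) {
               if (files.size() >= BATCH_SIZE) {
                   return new ArrayList<>(files.subList(0, BATCH_SIZE));
               }
           }
           return List.of();
       }

       // Eager-style selection: collect EVERY full batch at EVERY level,
       // compacting each level as much as possible in one round.
       static List<List<Integer>> pickAllBatches(Map<Integer, List<Integer>> levels) {
           List<List<Integer>> batches = new ArrayList<>();
           for (List<Integer> files : levels.values()) {
               for (int i = 0; i + BATCH_SIZE <= files.size(); i += BATCH_SIZE) {
                   batches.add(new ArrayList<>(files.subList(i, i + BATCH_SIZE)));
               }
           }
           return batches;
       }

       public static void main(String[] args) {
           // Hypothetical levels: level -> archived-file sizes at that level.
           Map<Integer, List<Integer>> levels = new TreeMap<>();
           levels.put(0, List.of(10, 12, 9, 11, 8, 10, 7)); // 7 files -> 2 full batches
           levels.put(1, List.of(100, 110, 95));            // 3 files -> 1 full batch

           System.out.println("one batch: " + pickOneBatch(levels).size() + " files");
           System.out.println("all batches: " + pickAllBatches(levels).size() + " batches");
       }
   }
   ```

   The trade-off is that the one-batch variant bounds the work (and target file size) of a single round, while the as-much-as-possible variant clears the backlog faster at the cost of larger rounds.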



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
