danny0405 commented on code in PR #8900:
URL: https://github.com/apache/hudi/pull/8900#discussion_r1223977198


##########
hudi-flink-datasource/hudi-flink/src/test/java/org/apache/hudi/sink/TestStreamWriteOperatorCoordinator.java:
##########
@@ -233,48 +233,49 @@ void testSyncMetadataTable() throws Exception {
    assertThat(completedTimeline.lastInstant().get().getTimestamp(), startsWith(HoodieTableMetadata.SOLO_COMMIT_TIMESTAMP));
 
     // test metadata table compaction
-    // write another 4 commits
-    for (int i = 1; i < 5; i++) {
+    // write another 9 commits to trigger compaction twice, since the default clean versions to retain is 2.

Review Comment:
   > The reason for making the change is to support restore
   
   First of all, I'm confused about why this change is related to restore. The change is for MDT log compaction, right? Can we address the restore issue in another PR?
   
   > then next cleaner job after compaction would remove previous file slice there by blocking restore on metadata table or loosing data.
   
   Does the new cleaning strategy actually solve the issue? Even if we keep at least 2 versions for each file group, that does not guarantee we can restore to a commit far back in the history.
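   To illustrate the retention concern, here is a minimal toy model, not Hudi's actual cleaner code; the `FileGroup` class, `versionsToRetain` field, and method names are all hypothetical. It shows that a keep-latest-N-versions style policy with N = 2 leaves only the two most recent file slices restorable, no matter how many commits were written:

   ```java
   import java.util.ArrayDeque;
   import java.util.Deque;

   // Toy model of a single file group under a keep-latest-file-versions
   // style cleaning policy. Hypothetical; not Hudi's real implementation.
   class FileGroup {
       private final int versionsToRetain;
       // Commit instants of surviving file slices, newest first.
       private final Deque<Integer> slices = new ArrayDeque<>();

       FileGroup(int versionsToRetain) {
           this.versionsToRetain = versionsToRetain;
       }

       void commit(int instantTime) {
           slices.addFirst(instantTime);
           // The "cleaner" drops the oldest slice once the budget is exceeded.
           while (slices.size() > versionsToRetain) {
               slices.removeLast();
           }
       }

       // Restore to an instant is only possible while its slice survives.
       boolean canRestoreTo(int instantTime) {
           return slices.contains(instantTime);
       }
   }
   ```

   After 9 commits with a retention of 2, only the slices for commits 8 and 9 survive, so a restore to commit 1 is impossible; this is the sense in which retaining 2 versions does not by itself enable restoring to a long history commit.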



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

Reply via email to