nsivabalan commented on code in PR #8900:
URL: https://github.com/apache/hudi/pull/8900#discussion_r1223854254


##########
hudi-client/hudi-spark-client/src/test/java/org/apache/hudi/client/functional/TestHoodieClientOnMergeOnReadStorage.java:
##########
@@ -314,7 +314,7 @@ public void testSchedulingCompactionAfterSchedulingLogCompaction() throws Exception
 
     // Try scheduling compaction, it won't succeed
     Option<String> compactionTimeStamp = client.scheduleCompaction(Option.empty());
-    assertFalse(compactionTimeStamp.isPresent());

Review Comment:
   do we know the reason why we had to flip this assertion?
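
   For reference, if the replacement line in this hunk (not shown above) flipped the check to a positive assertion, the test would now expect scheduling to succeed even with a log compaction already scheduled. A sketch of that flipped form, purely as an assumption about the missing line:

       // hypothetical flipped assertion: scheduling a compaction now succeeds
       // even though a log compaction was scheduled first
       assertTrue(compactionTimeStamp.isPresent());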



##########
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/metadata/HoodieMetadataWriteUtils.java:
##########
@@ -111,6 +112,10 @@ public static HoodieWriteConfig createMetadataWriteConfig(
            // deltacommits having corresponding completed commits. Therefore, we need to compact all fileslices of all
             // partitions together requiring UnBoundedCompactionStrategy.
             .withCompactionStrategy(new UnBoundedCompactionStrategy())
+            // Check if log compaction is enabled; this is needed for tables with a lot of records.
+            .withLogCompactionEnabled(writeConfig.isLogCompactionEnabled())
+            // This config is only used if enableLogCompactionForMetadata is set.

Review Comment:
   not sure I get your comment here: "This config is only used if enableLogCompactionForMetadata is set". From the code, it looks like we fetch the flag from writeConfig.isLogCompactionEnabled().
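
   If the comment were accurate, one would expect a guard along these lines (isLogCompactionEnabledForMetadata is a hypothetical accessor, used here only to illustrate the reading the comment suggests):

       // hypothetical: honor the flag only when a metadata-specific switch is on
       .withLogCompactionEnabled(
           writeConfig.isLogCompactionEnabledForMetadata()
               && writeConfig.isLogCompactionEnabled())

   but the diff reads the flag straight from writeConfig.isLogCompactionEnabled(), so the comment looks stale.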



##########
hudi-flink-datasource/hudi-flink/src/test/java/org/apache/hudi/sink/TestStreamWriteOperatorCoordinator.java:
##########
@@ -233,48 +233,49 @@ void testSyncMetadataTable() throws Exception {
     assertThat(completedTimeline.lastInstant().get().getTimestamp(), startsWith(HoodieTableMetadata.SOLO_COMMIT_TIMESTAMP));
 
     // test metadata table compaction
-    // write another 4 commits
-    for (int i = 1; i < 5; i++) {
+    // write another 9 commits to trigger compaction twice, since the default clean versions to retain is 2.

Review Comment:
   @danny0405: can you review the changes in the Flink classes?



##########
hudi-spark-datasource/hudi-spark/src/main/java/org/apache/hudi/cli/ArchiveExecutorUtils.java:
##########
@@ -57,6 +57,15 @@ public static int archive(JavaSparkContext jsc,
         .build();
     HoodieEngineContext context = new HoodieSparkEngineContext(jsc);
     HoodieSparkTable<HoodieAvroPayload> table = HoodieSparkTable.create(config, context);
+
+    // Check if the metadata table is already initialized. If it is, ignore the input argument enableMetadata.

Review Comment:
   are these checks required?
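
   For context, the truncated guard above presumably does something like the following (a sketch only; fs is an assumed Hadoop FileSystem handle, and the path helper is assumed to be the standard one):

       // if the metadata table already exists under the base path, keep it
       // enabled regardless of the caller-supplied enableMetadata flag
       String metadataTablePath =
           HoodieTableMetadata.getMetadataTableBasePath(config.getBasePath());
       boolean metadataInitialized = fs.exists(new Path(metadataTablePath));
       boolean effectiveEnableMetadata = metadataInitialized || enableMetadata;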



##########
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/metadata/HoodieBackedTableMetadataWriter.java:
##########
@@ -1021,17 +1023,46 @@ private void runPendingTableServicesOperations(BaseHoodieWriteClient writeClient
    * deltacommit.
    */
   protected void compactIfNecessary(BaseHoodieWriteClient writeClient, String latestDeltacommitTime) {
+
+    // Check if there are any pending compaction or log compaction instants in the timeline.
+    // If pending compaction/log compaction operations are found, abort scheduling new compaction/log compaction operations.
+    Option<HoodieInstant> pendingLogCompactionInstant =
+        metadataMetaClient.getActiveTimeline().filterPendingLogCompactionTimeline().firstInstant();

Review Comment:
   do we have tests for these? 
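
   if not, a test along these lines would exercise the guard (a sketch only; client setup is elided, and the scheduling entry points are the same ones used in the MOR test above):

       @Test
       void testCompactionAbortedWhileLogCompactionPending() throws Exception {
         // schedule a log compaction so the timeline has a pending instant
         Option<String> logCompactionTime = client.scheduleLogCompaction(Option.empty());
         assertTrue(logCompactionTime.isPresent());
         // scheduling a new compaction should now abort and return empty
         assertFalse(client.scheduleCompaction(Option.empty()).isPresent());
       }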



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
