nsivabalan commented on code in PR #9467:
URL: https://github.com/apache/hudi/pull/9467#discussion_r1298605259


##########
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/table/upgrade/SixToFiveDowngradeHandler.java:
##########
@@ -39,20 +47,26 @@
 
 import static org.apache.hudi.common.table.HoodieTableConfig.TABLE_METADATA_PARTITIONS;
 import static org.apache.hudi.common.table.HoodieTableConfig.TABLE_METADATA_PARTITIONS_INFLIGHT;
-import static org.apache.hudi.metadata.HoodieTableMetadataUtil.deleteMetadataTablePartition;
 
 /**
  * Downgrade handle to assist in downgrading hoodie table from version 6 to 5.
  * To ensure compatibility, we need recreate the compaction requested file to
  * .aux folder.
+ * Since version 6 includes a new schema field for metadata table(MDT),
+ * the MDT needs to be deleted during downgrade to avoid column drop error.
+ * Also log block version was upgraded in version 6, therefore full compaction needs
+ * to be completed during downgrade to avoid write failures.
  */
 public class SixToFiveDowngradeHandler implements DowngradeHandler {
 
   @Override
   public Map<ConfigProperty, String> downgrade(HoodieWriteConfig config, HoodieEngineContext context, String instantTime, SupportsUpgradeDowngrade upgradeDowngradeHelper) {
     final HoodieTable table = upgradeDowngradeHelper.getTable(config, context);
 
-    removeRecordIndexIfNeeded(table, context);
+    // Since version 6 includes a new schema field for metadata table(MDT), the MDT needs to be deleted during downgrade to avoid column drop error.
+    HoodieTableMetadataUtil.deleteMetadataTable(config.getBasePath(), context);
+    runCompaction(table, context, config, upgradeDowngradeHelper);

Review Comment:
   Add java docs explaining why we do compaction here.


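Responding to the request above, a possible javadoc for `runCompaction` could look like the following. The wording is only a suggestion drawn from the PR's own inline comments, not the committed text:

```java
/**
 * Utility method to run a full inline compaction on a MERGE_ON_READ table as part of
 * the downgrade step. The log block version was upgraded in table version 6, so any
 * remaining log files must be fully compacted before the table is handed back to a
 * version 5 writer; otherwise subsequent reads and compactions can fail.
 */
```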

##########
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/table/upgrade/SixToFiveDowngradeHandler.java:
##########
@@ -39,20 +47,26 @@
 
 import static org.apache.hudi.common.table.HoodieTableConfig.TABLE_METADATA_PARTITIONS;
 import static org.apache.hudi.common.table.HoodieTableConfig.TABLE_METADATA_PARTITIONS_INFLIGHT;
-import static org.apache.hudi.metadata.HoodieTableMetadataUtil.deleteMetadataTablePartition;
 
 /**
  * Downgrade handle to assist in downgrading hoodie table from version 6 to 5.
  * To ensure compatibility, we need recreate the compaction requested file to
  * .aux folder.
+ * Since version 6 includes a new schema field for metadata table(MDT),
+ * the MDT needs to be deleted during downgrade to avoid column drop error.
+ * Also log block version was upgraded in version 6, therefore full compaction needs
+ * to be completed during downgrade to avoid write failures.

Review Comment:
   minor: ..... to avoid both read and future compaction failures. 


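Applying the reviewer's suggested wording, the class javadoc sentence could read as below. This is only a proposed rewording of the existing sentence, not text from the PR:

```java
 * Also, the log block version was upgraded in version 6, therefore full compaction needs
 * to be completed during downgrade to avoid both read and future compaction failures.
```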

##########
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/table/upgrade/SixToFiveDowngradeHandler.java:
##########
@@ -65,13 +79,31 @@ public Map<ConfigProperty, String> downgrade(HoodieWriteConfig config, HoodieEng
   }
 
   /**
-   * Record-level index, a new partition in metadata table, was first added in
-   * 0.14.0 ({@link HoodieTableVersion#SIX}. Any downgrade from this version
-   * should remove this partition.
+   * Utility method to run compaction for MOR table as part of downgrade step.
    */
-  private static void removeRecordIndexIfNeeded(HoodieTable table, HoodieEngineContext context) {
-    HoodieTableMetaClient metaClient = table.getMetaClient();
-    deleteMetadataTablePartition(metaClient, context, MetadataPartitionType.RECORD_INDEX, false);
+  private void runCompaction(HoodieTable table, HoodieEngineContext context, HoodieWriteConfig config,
+                             SupportsUpgradeDowngrade upgradeDowngradeHelper) {
+    try {
+      if (table.getMetaClient().getTableType() == HoodieTableType.MERGE_ON_READ) {
+        // The log block version has been upgraded in version six so compaction is required for downgrade.
+        // set required configs for scheduling compaction.
+        HoodieInstantTimeGenerator.setCommitTimeZone(table.getMetaClient().getTableConfig().getTimelineTimezone());
+        HoodieWriteConfig compactionConfig = HoodieWriteConfig.newBuilder().withProps(config.getProps()).build();
+        compactionConfig.setValue(HoodieCompactionConfig.INLINE_COMPACT.key(), "true");
+        compactionConfig.setValue(HoodieCompactionConfig.INLINE_COMPACT_NUM_DELTA_COMMITS.key(), "1");
+        compactionConfig.setValue(HoodieCompactionConfig.INLINE_COMPACT_TRIGGER_STRATEGY.key(), CompactionTriggerStrategy.NUM_COMMITS.name());
+        compactionConfig.setValue(HoodieCompactionConfig.COMPACTION_STRATEGY.key(), UnBoundedCompactionStrategy.class.getName());
+        compactionConfig.setValue(HoodieMetadataConfig.ENABLE.key(), "false");
+        EmbeddedTimelineServerHelper.createEmbeddedTimelineService(context, config);

Review Comment:
   Why create the timeline server externally here? The write client will create one internally, won't it?


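One way to address the question above: if the intent is just to have a timeline service available during the compaction run, it may be simpler to let the write client manage its own embedded server through its write config rather than calling `EmbeddedTimelineServerHelper` explicitly. A minimal sketch, assuming the internal compaction path honors this flag:

```properties
# Hudi write config: have the write client start and stop its own embedded
# timeline server, instead of creating one via EmbeddedTimelineServerHelper.
hoodie.embed.timeline.server=true
```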

-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
