leesf commented on a change in pull request #3426:
URL: https://github.com/apache/hudi/pull/3426#discussion_r688546421



##########
File path: hudi-client/hudi-client-common/src/main/java/org/apache/hudi/metadata/HoodieBackedTableMetadataWriter.java
##########
@@ -110,23 +119,31 @@ protected HoodieBackedTableMetadataWriter(Configuration hadoopConf, HoodieWriteC
       ValidationUtils.checkArgument(!this.metadataWriteConfig.useFileListingMetadata(), "File listing cannot be used for Metadata Table");
 
       initRegistry();
-      HoodieTableMetaClient datasetMetaClient = HoodieTableMetaClient.builder().setConf(hadoopConf).setBasePath(datasetWriteConfig.getBasePath()).build();
-      initialize(engineContext, datasetMetaClient);
-      if (enabled) {
-        // This is always called even in case the table was created for the first time. This is because
-        // initFromFilesystem() does file listing and hence may take a long time during which some new updates
-        // may have occurred on the table. Hence, calling this always ensures that the metadata is brought in sync
-        // with the active timeline.
-        HoodieTimer timer = new HoodieTimer().startTimer();
-        syncFromInstants(datasetMetaClient);
-        metrics.ifPresent(m -> m.updateMetrics(HoodieMetadataMetrics.SYNC_STR, timer.endTimer()));
-      }
+      this.datasetMetaClient = HoodieTableMetaClient.builder().setConf(hadoopConf).setBasePath(datasetWriteConfig.getBasePath()).build();
+      initTransactionManager();
+      initialize(engineContext);
+      initTableMetadata();
     } else {
       enabled = false;
       this.metrics = Option.empty();
     }
   }
 
+  /**
+   * Initialize the {@code TransactionManager} to use for metadata table.
+   *
+   * In HUDI multi writer mode, each operation will sync to metadata table before completion. Metadata table has common
+   * base and log files to update for each operation. So we can only support serialized operations.
+   */
+  private void initTransactionManager() {
+    // The lock location should be different from the dataset
+    Properties properties = new Properties();
+    properties.putAll(datasetWriteConfig.getProps());
+    properties.setProperty(FILESYSTEM_LOCK_PATH_PROP_KEY, properties.getProperty(FILESYSTEM_LOCK_PATH_PROP_KEY, datasetWriteConfig.getBasePath() + "/.hoodie/.locks") + "/metadata");

Review comment:
       make the '/.hoodie/.locks' a constant?
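A minimal sketch of the suggestion: hoist the `"/.hoodie/.locks"` literal into a named constant and derive the metadata-table lock path from it. The class name, constant names, and the property key below are illustrative placeholders, not the actual Hudi identifiers; the real key would come from Hudi's lock configuration.

```java
import java.util.Properties;

public class LockPathDefaults {
  // Hypothetical constant replacing the inline "/.hoodie/.locks" literal,
  // as suggested in the review comment.
  public static final String DEFAULT_LOCK_PATH_SUFFIX = "/.hoodie/.locks";

  // Illustrative key name; the actual key is defined by Hudi's lock config.
  public static final String FILESYSTEM_LOCK_PATH_PROP_KEY = "hoodie.write.lock.filesystem.path";

  /**
   * Resolve the lock path for the metadata table: honor an explicitly
   * configured lock path if present, otherwise fall back to the default
   * location under the dataset base path, then append "/metadata" so the
   * metadata table locks in a different location than the dataset.
   */
  public static String metadataLockPath(Properties props, String basePath) {
    String datasetLockPath =
        props.getProperty(FILESYSTEM_LOCK_PATH_PROP_KEY, basePath + DEFAULT_LOCK_PATH_SUFFIX);
    return datasetLockPath + "/metadata";
  }

  public static void main(String[] args) {
    // No explicit lock path configured: fall back to the default suffix.
    Properties props = new Properties();
    System.out.println(metadataLockPath(props, "/tmp/tbl"));
  }
}
```

With no override the call yields `/tmp/tbl/.hoodie/.locks/metadata`; setting the property redirects the lock path while `/metadata` is still appended.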




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
