yihua commented on code in PR #8301:
URL: https://github.com/apache/hudi/pull/8301#discussion_r1187759698


##########
hudi-spark-datasource/hudi-spark/src/main/scala/org/apache/spark/sql/hudi/command/procedures/HiveSyncProcedure.scala:
##########
@@ -41,7 +41,8 @@ class HiveSyncProcedure extends BaseProcedure with ProcedureBuilder
     ProcedureParameter.optional(5, "mode", DataTypes.StringType, ""),
     ProcedureParameter.optional(6, "partition_fields", DataTypes.StringType, ""),
     ProcedureParameter.optional(7, "partition_extractor_class", DataTypes.StringType, ""),
-    ProcedureParameter.optional(8, "strategy", DataTypes.StringType, "")
+    ProcedureParameter.optional(8, "strategy", DataTypes.StringType, ""),
+    ProcedureParameter.optional(9, "partition_fixmode", DataTypes.StringType, "")

Review Comment:
   Let's fix the naming here too.



##########
hudi-sync/hudi-sync-common/src/main/java/org/apache/hudi/sync/common/HoodieSyncConfig.java:
##########
@@ -163,6 +163,11 @@ public class HoodieSyncConfig extends HoodieConfig {
       .defaultValue("")
       .withDocumentation("The spark version used when syncing with a metastore.");
 
+  public static final ConfigProperty<String> META_SYNC_PARTITION_FIXMODE = ConfigProperty
+      .key("hoodie.datasource.hive_sync.partition_fixmode")
+      .defaultValue("false")
+      .withDocumentation("Implement a full partition sync operation when partitions are lost.");

Review Comment:
   If `incremental` is the name, the default should be `true`, i.e., "lastCommitTimeSynced is used as the baseline to synchronize the partitions newly generated after lastCommitTimeSynced".
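
For concreteness, here is a sketch of what the renamed property could look like under this suggestion (`incremental` naming, default `true`). The field name, key, and documentation text below are hypothetical; the exact key prefix is still being discussed later in this thread.

```java
// Hypothetical rewrite of META_SYNC_PARTITION_FIXMODE following the review
// suggestion: rename "partition_fixmode" to "incremental" and default to true,
// so the normal path syncs only partitions added after lastCommitTimeSynced,
// while false forces a full partition sync that can repair lost partitions.
public static final ConfigProperty<String> META_SYNC_INCREMENTAL = ConfigProperty
    .key("hoodie.datasource.hive_sync.incremental")  // prefix still under discussion
    .defaultValue("true")
    .withDocumentation("When true, lastCommitTimeSynced is used as the baseline and only "
        + "partitions generated after it are synchronized; when false, a full partition "
        + "sync is performed, which can restore partitions missing from the metastore.");
```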



##########
hudi-sync/hudi-sync-common/src/main/java/org/apache/hudi/sync/common/HoodieSyncConfig.java:
##########
@@ -226,6 +231,8 @@ public static class HoodieSyncConfigParams {
     public Boolean isConditionalSync;
     @Parameter(names = {"--spark-version"}, description = "The spark version")
     public String sparkVersion;
+    @Parameter(names = {"--partition-fixmode"}, description = "Implement a full partition sync operation when partitions are lost.")

Review Comment:
   Same here for naming.



##########
hudi-spark-datasource/hudi-spark/src/main/scala/org/apache/spark/sql/hudi/command/procedures/HiveSyncProcedure.scala:
##########
@@ -66,6 +67,7 @@ class HiveSyncProcedure extends BaseProcedure with ProcedureBuilder
     val partitionFields = getArgValueOrDefault(args, PARAMETERS(6)).get.asInstanceOf[String]
     val partitionExtractorClass = getArgValueOrDefault(args, PARAMETERS(7)).get.asInstanceOf[String]
     val strategy = getArgValueOrDefault(args, PARAMETERS(8)).get.asInstanceOf[String]
+    val partitionFixMode = getArgValueOrDefault(args, PARAMETERS(9)).get.asInstanceOf[String]

Review Comment:
   Let's fix the variable naming here too.



##########
hudi-sync/hudi-sync-common/src/main/java/org/apache/hudi/sync/common/HoodieSyncConfig.java:
##########
@@ -163,6 +163,11 @@ public class HoodieSyncConfig extends HoodieConfig {
       .defaultValue("")
       .withDocumentation("The spark version used when syncing with a metastore.");
 
+  public static final ConfigProperty<String> META_SYNC_PARTITION_FIXMODE = ConfigProperty
+      .key("hoodie.datasource.hive_sync.partition_fixmode")
+      .defaultValue("false")
+      .withDocumentation("Implement a full partition sync operation when partitions are lost.");

Review Comment:
   Agree that `hoodie.datasource.hive_sync.incremental` is a better name. Since this can also apply to Glue Catalog sync, can we name it with `meta.sync`, i.e., `hoodie.meta.sync.incremental`? @xushiyan what is the naming convention? I see different prefixes in use, e.g., `hoodie.meta.sync`, `hoodie.datasource.meta_sync`, `hoodie.meta_sync`.


