danny0405 commented on code in PR #7056:
URL: https://github.com/apache/hudi/pull/7056#discussion_r1009099027
##########
hudi-flink-datasource/hudi-flink/src/main/java/org/apache/hudi/table/catalog/HoodieHiveCatalog.java:
##########
@@ -799,7 +800,7 @@ public void dropPartition(
     try (HoodieFlinkWriteClient<?> writeClient = createWriteClient(tablePath, table)) {
       boolean hiveStylePartitioning = Boolean.parseBoolean(table.getOptions().get(FlinkOptions.HIVE_STYLE_PARTITIONING.key()));
       writeClient.deletePartitions(
-          Collections.singletonList(HoodieCatalogUtil.inferPartitionPath(hiveStylePartitioning, partitionSpec)),
+          Collections.singletonList(HoodieCatalogUtil.inferPartitionPath(hiveStylePartitioning, partitionSpec)),
           HoodieActiveTimeline.createNewInstantTime())
Review Comment:
Unnecessary change.
##########
hudi-flink-datasource/hudi-flink/src/main/java/org/apache/hudi/table/catalog/HoodieHiveCatalog.java:
##########
@@ -546,7 +546,8 @@ private Table instantiateHiveTable(ObjectPath tablePath, CatalogBaseTable table,
     // because since Hive 3.x, there is validation when altering table,
     // when the metadata fields are synced through the hive sync tool,
     // a compatibility issue would be reported.
-    List<FieldSchema> allColumns = HiveSchemaUtils.toHiveFieldSchema(table.getSchema());
+    boolean withOperationField = Configuration.fromMap(table.getOptions()).getBoolean(FlinkOptions.CHANGELOG_ENABLED);
+    List<FieldSchema> allColumns = HiveSchemaUtils.toHiveFieldSchema(table.getSchema(), withOperationField);
Review Comment:
`Configuration.fromMap(table.getOptions())` is a heavy operation; we should avoid it and read the option directly from the options map instead.
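A minimal sketch of the cheaper alternative the comment suggests: instead of materializing a full Flink `Configuration` from the options map just to read one flag, parse the raw option string directly with `Map.getOrDefault`. The key name `changelog.enabled` and the helper `isChangelogEnabled` below are hypothetical stand-ins for `FlinkOptions.CHANGELOG_ENABLED.key()` and the catalog code, used here only to keep the example self-contained; it assumes the option defaults to `false` when absent, matching `FlinkOptions.CHANGELOG_ENABLED`'s default.

```java
import java.util.HashMap;
import java.util.Map;

public class OptionLookup {
  // Hypothetical stand-in for FlinkOptions.CHANGELOG_ENABLED.key()
  static final String CHANGELOG_ENABLED_KEY = "changelog.enabled";

  // Reads the boolean flag straight from the options map, avoiding the
  // heavier Configuration.fromMap(...) construction for a single lookup.
  static boolean isChangelogEnabled(Map<String, String> options) {
    return Boolean.parseBoolean(
        options.getOrDefault(CHANGELOG_ENABLED_KEY, "false"));
  }

  public static void main(String[] args) {
    Map<String, String> options = new HashMap<>();
    System.out.println(isChangelogEnabled(options)); // prints false (default)
    options.put(CHANGELOG_ENABLED_KEY, "true");
    System.out.println(isChangelogEnabled(options)); // prints true
  }
}
```

The trade-off is that the raw-string lookup skips `Configuration`'s type validation, which is acceptable for a single boolean read on a hot path.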
--
This is an automated message from the Apache Git Service.