wuchong commented on a change in pull request #17988:
URL: https://github.com/apache/flink/pull/17988#discussion_r773947555
##########
File path:
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/connectors/hive/HiveOptions.java
##########
@@ -51,4 +51,11 @@
                     .withDescription(
                             "If it is false, using flink native writer to write parquet and orc files; "
                                     + "If it is true, using hadoop mapred record writer to write parquet and orc files.");
+
+    public static final ConfigOption<Integer> TABLE_EXEC_HIVE_PARTITION_SPLIT_THREAD_NUM =
+            key("table.exec.hive.partition-split.thread.num")
Review comment:
   Sorry @Myracle, I have one more question. Is this configuration similar
to Hive's `hive.load.dynamic.partitions.thread`? If so, I think we could name
the option `table.exec.hive.load-partition-splits.thread-num` to be closer to
the actual behavior. What do you think?
   Besides, could you also add this configuration to the Hive documentation?
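   With the suggested rename, the option declaration in `HiveOptions` might look roughly like this (a sketch using Flink's `ConfigOptions` builder; the default value of 3 and the description text are assumptions, not taken from the PR):
```java
import org.apache.flink.configuration.ConfigOption;
import static org.apache.flink.configuration.ConfigOptions.key;

    // Hypothetical sketch of the renamed option; default value and
    // description are assumptions for illustration only.
    public static final ConfigOption<Integer>
            TABLE_EXEC_HIVE_LOAD_PARTITION_SPLITS_THREAD_NUM =
                    key("table.exec.hive.load-partition-splits.thread-num")
                            .intType()
                            .defaultValue(3) // assumed default
                            .withDescription(
                                    "The number of threads used to load Hive "
                                            + "partition splits.");
```
   Users could then tune it like any other table option, e.g. `SET 'table.exec.hive.load-partition-splits.thread-num' = '8';` in SQL client.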
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]