manbuyun commented on a change in pull request #30225:
URL: https://github.com/apache/spark/pull/30225#discussion_r534904370
##########
File path:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
##########
@@ -815,6 +815,16 @@ object SQLConf {
.booleanConf
.createWithDefault(true)
+  val HIVE_METASTORE_PARTITION_LIMIT =
+    buildConf("spark.sql.hive.metastorePartitionLimit")
+      .doc("The maximum number of metastore partitions allowed for a given table. The default " +
+        "value -1 to follow the Hive config (see HiveConf.METASTORE_LIMIT_PARTITION_REQUEST " +
+        "for more information).")
+      .version("3.1.0")
+      .intConf
+      .checkValue(_ >= -1, "The maximum must be a positive integer, -1 to follow the Hive config.")
+      .createWithDefault(100000)
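
To make the semantics of the proposed conf concrete, here is a minimal sketch of the enforcement pattern it implies: compare the number of partitions about to be fetched against the limit, with -1 meaning no Spark-side cap (Hive's own HiveConf.METASTORE_LIMIT_PARTITION_REQUEST would still apply on the server side). This is not the PR's actual code path; the helper name, signature, and error message are assumptions made for illustration.

    // Hypothetical sketch, not code from this PR: the conf key and the -1
    // semantics come from the diff above; everything else is illustrative.
    import scala.util.Try

    def checkPartitionLimit(table: String, partitionCount: Int, limit: Int): Unit = {
      // limit == -1 disables the Spark-side cap; any value >= 0 is enforced here.
      if (limit >= 0 && partitionCount > limit) {
        throw new IllegalArgumentException(
          s"Table $table has $partitionCount partitions, which exceeds " +
            s"spark.sql.hive.metastorePartitionLimit ($limit); narrow the partition " +
            "predicate or raise the limit.")
      }
    }

    // With the proposed default of 100000, a 150000-partition fetch fails fast:
    println(Try(checkPartitionLimit("db.events", partitionCount = 150000, limit = 100000)))
    // Failure(java.lang.IllegalArgumentException: Table db.events has 150000 partitions, ...)

    // With -1 the Spark-side check is skipped entirely:
    checkPartitionLimit("db.events", partitionCount = 150000, limit = -1)
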
Review comment:
@sunchao Thank you for your response. I think this is a reasonable
maximum; Presto also has a parameter that limits the number of partitions in
HiveMetadata#getPartitionsAsList, with a default value of 100_000.
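
As a side note, if the conf ships as a regular (non-static) SQL conf, users could adjust it per session as sketched below; whether it ends up runtime-settable is an assumption here, not something confirmed in this thread.

    // Assumes an existing SparkSession `spark` built with Hive support.
    spark.conf.set("spark.sql.hive.metastorePartitionLimit", "-1")  // defer to Hive's own limit
    spark.sql("SET spark.sql.hive.metastorePartitionLimit=200000")  // or raise the cap via SQL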