sunchao commented on a change in pull request #30225:
URL: https://github.com/apache/spark/pull/30225#discussion_r535422867



##########
File path: sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
##########
@@ -815,6 +815,16 @@ object SQLConf {
       .booleanConf
       .createWithDefault(true)
 
+  val HIVE_METASTORE_PARTITION_LIMIT =
+    buildConf("spark.sql.hive.metastorePartitionLimit")
+      .doc("The maximum number of metastore partitions allowed for a given 
table. The default " +
+           "value -1 to follow the Hive config (see 
HiveConf.METASTORE_LIMIT_PARTITION_REQUEST " +
+           "for more information).")
+      .version("3.1.0")
+      .intConf
+      .checkValue(_ >= -1, "The maximum must be a positive integer, -1 to 
follow the Hive config.")
+      .createWithDefault(100000)

Review comment:
       Yea, the default value 100_000 looks fine to me. My main question is 
whether making that the default forces us to double the HMS calls. It seems 
Presto doesn't call `getNumPartitionsByFilter`; instead it streams through a 
partition iterator and stops once the threshold is reached.
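
For illustration, here is a minimal sketch of that stream-and-stop approach. All names below are hypothetical stand-ins; the real Presto and Hive metastore client APIs differ:

```scala
import scala.collection.mutable.ArrayBuffer

object PartitionLimitSketch {
  // Thrown when a table exposes more partitions than the configured limit.
  final case class PartitionLimitExceeded(table: String, limit: Int)
    extends RuntimeException(s"Table $table has more than $limit partitions")

  // Streams through the partition names and fails fast once the limit is
  // crossed, so no separate count call (e.g. getNumPartitionsByFilter) is
  // needed. A limit of -1 disables the client-side check.
  def takeUpToLimit(
      table: String,
      partitions: Iterator[String],
      limit: Int): Seq[String] = {
    if (limit < 0) return partitions.toSeq
    val buffer = ArrayBuffer.empty[String]
    while (partitions.hasNext) {
      if (buffer.size == limit) throw PartitionLimitExceeded(table, limit)
      buffer += partitions.next()
    }
    buffer.toSeq
  }

  def main(args: Array[String]): Unit = {
    // `fakePartitions` stands in for an iterator backed by HMS paging.
    val fakePartitions = Iterator.tabulate(10)(i => s"ds=2020-01-${i + 1}")
    println(takeUpToLimit("web_logs", fakePartitions, 100).size) // prints 10
  }
}
```

With this shape, enforcing the limit costs at most one extra iterator step on top of the listing itself, instead of an additional metastore round trip to count partitions first.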



