cloud-fan commented on a change in pull request #26805: [SPARK-15616][SQL] Add optimizer rule PruneHiveTablePartitions
URL: https://github.com/apache/spark/pull/26805#discussion_r363162339
 
 

 ##########
 File path: sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
 ##########
 @@ -1375,6 +1375,16 @@ object SQLConf {
     .booleanConf
     .createWithDefault(false)
 
+  val FALL_BACK_TO_HDFS_FOR_STATS_MAX_PART_NUM =
+    buildConf("spark.sql.statistics.fallBackToHdfs.maxPartitionNum")
+    .doc("If the number of table partitions exceeds this value, falling back to hdfs " +
+      "for statistics calculation is not allowed. This is used to avoid calculating " +
+      "the size of a large number of partitions through hdfs, which is very " +
+      "time-consuming. Setting this value to 0 or a negative value disables falling " +
+      "back to hdfs for partition statistics calculation.")
 
 Review comment:
   If this is a common problem, let's leave it here and open a new PR to fix it completely later.
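
   For context, here is a minimal, self-contained sketch of the guard this config would drive. It is not the PR's actual code: `FallbackGuard` and `shouldFallBackToHdfs` are hypothetical stand-ins used only to illustrate the intended semantics of `spark.sql.statistics.fallBackToHdfs.maxPartitionNum`.

   ```scala
   // Hypothetical sketch, not the PR's implementation: models how a
   // maxPartitionNum limit can gate the HDFS fallback for statistics.
   object FallbackGuard {

     // `maxPartitionNum` stands in for the value of
     // spark.sql.statistics.fallBackToHdfs.maxPartitionNum.
     def shouldFallBackToHdfs(numPartitions: Int, maxPartitionNum: Int): Boolean =
       maxPartitionNum > 0 && numPartitions <= maxPartitionNum

     def main(args: Array[String]): Unit = {
       println(shouldFallBackToHdfs(numPartitions = 500, maxPartitionNum = 1000))  // true: under the limit
       println(shouldFallBackToHdfs(numPartitions = 5000, maxPartitionNum = 1000)) // false: too many partitions to scan cheaply
       println(shouldFallBackToHdfs(numPartitions = 10, maxPartitionNum = 0))      // false: 0 disables the fallback entirely
     }
   }
   ```

   With this shape, a value of 0 or a negative value disables the fallback unconditionally, matching the doc string above.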
