fuwhu commented on a change in pull request #26805: [SPARK-15616][SQL] Add optimizer rule PruneHiveTablePartitions
URL: https://github.com/apache/spark/pull/26805#discussion_r363016268
##########
File path:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
##########
@@ -1375,6 +1375,16 @@ object SQLConf {
.booleanConf
.createWithDefault(false)
+  val FALL_BACK_TO_HDFS_FOR_STATS_MAX_PART_NUM =
+    buildConf("spark.sql.statistics.fallBackToHdfs.maxPartitionNum")
+      .doc("If the number of table partitions exceeds this value, falling back to hdfs " +
+        "for statistics calculation is not allowed. This is used to avoid calculating " +
+        "the size of a large number of partitions through hdfs, which is very time " +
+        "consuming. Setting this value to 0 or negative will disable falling back to " +
+        "hdfs for partition statistics calculation.")
Review comment:
Yes, PruneFileSourcePartitions may also lead to calculating the size of a large number of partitions through hdfs.
I will create a follow-up PR to refine it after this PR is finished.
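
For illustration, here is a minimal self-contained sketch (not the code in this PR; the object and method names below are hypothetical) of how a threshold like `spark.sql.statistics.fallBackToHdfs.maxPartitionNum` could gate the fallback, following the semantics in the doc string above:

```scala
object FallbackGateSketch {
  // Hypothetical helper: decide whether falling back to hdfs for size
  // calculation is allowed for a table with `partitionCount` partitions,
  // given the configured threshold `maxPartitionNum`.
  def shouldFallBackToHdfs(partitionCount: Int, maxPartitionNum: Int): Boolean = {
    // Per the doc string: a value of 0 or negative disables the fallback.
    if (maxPartitionNum <= 0) false
    // Otherwise only fall back when the partition count stays within the
    // threshold, so we never stat a huge number of partitions on hdfs.
    else partitionCount <= maxPartitionNum
  }

  def main(args: Array[String]): Unit = {
    println(shouldFallBackToHdfs(partitionCount = 50, maxPartitionNum = 100))  // true
    println(shouldFallBackToHdfs(partitionCount = 500, maxPartitionNum = 100)) // false
    println(shouldFallBackToHdfs(partitionCount = 50, maxPartitionNum = 0))    // false (disabled)
  }
}
```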