c21 commented on a change in pull request #29804:
URL: https://github.com/apache/spark/pull/29804#discussion_r495702507
##########
File path:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
##########
@@ -951,6 +951,14 @@ object SQLConf {
       .checkValue(_ > 0, "the value of spark.sql.sources.bucketing.maxBuckets must be greater than 0")
       .createWithDefault(100000)
+  val AUTO_BUCKETED_SCAN_ENABLED =
+    buildConf("spark.sql.sources.bucketing.autoBucketedScan.enabled")
+      .doc("When true, decide whether to do bucketed scan on input tables based on query plan " +
+        "automatically.")
+      .version("3.1.0")
Review comment:
@maropu - sure. Just for my own education, what does marking a config as
internal vs. external actually indicate?
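For reference, a minimal sketch of the two variants in the SQLConf builder DSL. As far as I understand, `.internal()` keeps the entry out of the user-facing documentation and `SET -v` output, while omitting it makes the config public. The `booleanConf`/`createWithDefault(true)` tail below is an assumption for illustration, not taken from this diff:
```scala
// Internal variant: hidden from user-facing docs and `SET -v`.
val AUTO_BUCKETED_SCAN_ENABLED =
  buildConf("spark.sql.sources.bucketing.autoBucketedScan.enabled")
    .internal()  // omit this call to make the config externally documented
    .doc("When true, decide whether to do bucketed scan on input tables " +
      "based on query plan automatically.")
    .version("3.1.0")
    .booleanConf
    .createWithDefault(true)  // assumed default, for illustration only
```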
##########
File path:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
##########
@@ -951,6 +951,14 @@ object SQLConf {
       .checkValue(_ > 0, "the value of spark.sql.sources.bucketing.maxBuckets must be greater than 0")
       .createWithDefault(100000)
+  val AUTO_BUCKETED_SCAN_ENABLED =
+    buildConf("spark.sql.sources.bucketing.autoBucketedScan.enabled")
+      .doc("When true, decide whether to do bucketed scan on input tables based on query plan " +
Review comment:
@maropu - sure, wondering what you think of the wording below?
```
When true, decide whether to do bucketed scan on input tables based on query plan
automatically. Do not use bucketed scan if 1. the query does not have operators to
utilize bucketing (e.g. join, group-by, etc.), or 2. there's an exchange operator
between these operators and the table scan.
```
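If that wording is adopted, the complete entry might read roughly as follows. This is only a sketch: the `booleanConf`/`createWithDefault(true)` tail is my assumption and not part of the diff shown above.
```scala
// Sketch of the full SQLConf entry with the proposed doc text.
val AUTO_BUCKETED_SCAN_ENABLED =
  buildConf("spark.sql.sources.bucketing.autoBucketedScan.enabled")
    .doc("When true, decide whether to do bucketed scan on input tables based on query plan " +
      "automatically. Do not use bucketed scan if 1. the query does not have operators to " +
      "utilize bucketing (e.g. join, group-by, etc.), or 2. there's an exchange operator " +
      "between these operators and the table scan.")
    .version("3.1.0")
    .booleanConf                 // assumed config type
    .createWithDefault(true)     // assumed default, for illustration only
```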