cloud-fan commented on a change in pull request #28123:
URL: https://github.com/apache/spark/pull/28123#discussion_r433262285
##########
File path: sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
##########
@@ -2586,6 +2586,26 @@ object SQLConf {
.checkValue(_ > 0, "The timeout value must be positive")
.createWithDefault(10L)
+  val COALESCE_BUCKETS_IN_SORT_MERGE_JOIN_ENABLED =
+    buildConf("spark.sql.bucketing.coalesceBucketsInSortMergeJoin.enabled")
+      .doc("When true, if two bucketed tables with the different number of buckets are joined, " +
+        "the side with a bigger number of buckets will be coalesced to have the same number " +
+        "of buckets as the other side. Bucket coalescing is applied only to sort-merge joins " +
+        "and only when the bigger number of buckets is divisible by the smaller number of buckets.")
+      .version("3.1.0")
+      .booleanConf
+      .createWithDefault(false)
+
+  val COALESCE_BUCKETS_IN_SORT_MERGE_JOIN_MAX_NUM_BUCKETS_DIFF =
+    buildConf("spark.sql.bucketing.coalesceBucketsInSortMergeJoin.maxNumBucketsDiff")
+      .doc("The difference in count of two buckets being coalesced should be less than or " +
Review comment:
Shall we use the ratio rather than the absolute difference? E.g., coalescing 128
buckets down to 1 doesn't look good. Maybe we can phrase it as: the buckets can be
coalesced to be at most n times slower.