manuzhang commented on a change in pull request #26434: [SPARK-29544] [SQL]
optimize skewed partition based on data size
URL: https://github.com/apache/spark/pull/26434#discussion_r358159260
##########
File path:
sql/core/src/test/scala/org/apache/spark/sql/execution/adaptive/AdaptiveQueryExecSuite.scala
##########
@@ -552,4 +577,155 @@ class AdaptiveQueryExecSuite
spark.sparkContext.removeSparkListener(listener)
}
}
+
+ test("adaptive skew join both in left and right for inner join ") {
+ withSQLConf(
+ SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> "true",
+ SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> "-1",
+ SQLConf.ADAPTIVE_EXECUTION_SKEWED_PARTITION_FACTOR.key -> "1",
+ SQLConf.ADAPTIVE_EXECUTION_SKEWED_PARTITION_SIZE_THRESHOLD.key -> "100",
+ SQLConf.SHUFFLE_TARGET_POSTSHUFFLE_INPUT_SIZE.key -> "2000") {
+ val (plan, adaptivePlan) = runAdaptiveAndVerifyResult(
+ "SELECT * FROM skewData1 join skewData2 ON key1 = key2")
+ val smj = findTopLevelSortMergeJoin(plan)
+ assert(smj.size == 1)
+ // left stats: [4403, 0, 1927, 1927, 1927]
Review comment:
I think it's because `ADAPTIVE_EXECUTION_SKEWED_PARTITION_SIZE_THRESHOLD` is
set to 1, but that feels like cheating. Also, I think we need a test where the skewed
join optimization is not applied when either the partition factor or the partition
size threshold is not satisfied.
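
For the negative case, a sketch along the lines of the existing tests might look like
the following. It assumes the same `skewData1`/`skewData2` test tables and the
`runAdaptiveAndVerifyResult`/`findTopLevelSortMergeJoin` helpers already used in this
suite; the config values are only illustrative (chosen high enough that no partition
should qualify as skewed), and the final assertion is a placeholder since the exact
plan node this PR introduces for skew-handled joins isn't visible in this hunk:

```scala
test("skew join optimization is not applied when thresholds are not met") {
  withSQLConf(
    SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> "true",
    SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> "-1",
    // Deliberately strict: a partition must be 10x the median and larger than
    // ~10MB to be treated as skewed, so nothing in the small test data qualifies.
    SQLConf.ADAPTIVE_EXECUTION_SKEWED_PARTITION_FACTOR.key -> "10",
    SQLConf.ADAPTIVE_EXECUTION_SKEWED_PARTITION_SIZE_THRESHOLD.key -> "10000000") {
    val (plan, adaptivePlan) = runAdaptiveAndVerifyResult(
      "SELECT * FROM skewData1 join skewData2 ON key1 = key2")
    // The original plan contains a single sort merge join ...
    assert(findTopLevelSortMergeJoin(plan).size == 1)
    // ... and the adaptive plan should keep a single, unsplit sort merge join,
    // i.e. no extra joins produced by the skew handling. (Adjust this check to
    // however the PR marks skew-optimized joins in the final plan.)
    assert(findTopLevelSortMergeJoin(adaptivePlan).size == 1)
  }
}
```

A test like this would also catch regressions where the optimization fires even though
neither threshold is crossed.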