This is an automated email from the ASF dual-hosted git repository.
dongjoon pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/spark.git
The following commit(s) were added to refs/heads/branch-3.1 by this push:
new c0ad339 [SPARK-35559][TEST] Speed up one test in AdaptiveQueryExecSuite
c0ad339 is described below
commit c0ad339f7a6520a7840a933977dca8670c8bf83a
Author: Wenchen Fan <[email protected]>
AuthorDate: Fri May 28 12:39:34 2021 -0700
[SPARK-35559][TEST] Speed up one test in AdaptiveQueryExecSuite
### What changes were proposed in this pull request?
I just noticed that `AdaptiveQueryExecSuite.SPARK-34091: Batch shuffle fetch in AQE partition coalescing` takes more than 10 minutes to finish, which is unacceptable.
This PR sets the shuffle partitions to 10 in that test, so that the test can finish within 5 seconds.
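The change relies on the suite's `withSQLConf` helper, which temporarily overrides SQL configs for the enclosed block and restores them afterwards. A minimal sketch of the pattern, assuming Spark's test utilities (`SQLTestUtils`) are on the classpath and a `SparkSession` named `spark` is available:

```scala
import org.apache.spark.sql.internal.SQLConf

// Temporarily lower the shuffle partition count so the coalescing
// test produces 10 shuffle partitions instead of 10000, which is
// what makes it finish quickly. Configs revert when the block exits.
withSQLConf(
  SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> "true",
  SQLConf.SHUFFLE_PARTITIONS.key -> "10",
  SQLConf.FETCH_SHUFFLE_BLOCKS_IN_BATCH.key -> "true") {
  // run the query under test here
}
```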
### Why are the changes needed?
To speed up the test.
### Does this PR introduce _any_ user-facing change?
no
### How was this patch tested?
N/A
Closes #32695 from cloud-fan/test.
Authored-by: Wenchen Fan <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
(cherry picked from commit 678592a6121a5237f05956e7d9f0565d82d1860a)
Signed-off-by: Dongjoon Hyun <[email protected]>
---
.../apache/spark/sql/execution/adaptive/AdaptiveQueryExecSuite.scala | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/adaptive/AdaptiveQueryExecSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/adaptive/AdaptiveQueryExecSuite.scala
index f7570c0..c8c4f97 100644
--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/adaptive/AdaptiveQueryExecSuite.scala
+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/adaptive/AdaptiveQueryExecSuite.scala
@@ -1457,7 +1457,7 @@ class AdaptiveQueryExecSuite
test("SPARK-34091: Batch shuffle fetch in AQE partition coalescing") {
withSQLConf(
SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> "true",
- SQLConf.SHUFFLE_PARTITIONS.key -> "10000",
+ SQLConf.SHUFFLE_PARTITIONS.key -> "10",
SQLConf.FETCH_SHUFFLE_BLOCKS_IN_BATCH.key -> "true") {
withTable("t1") {
spark.range(100).selectExpr("id + 1 as a").write.format("parquet").saveAsTable("t1")
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]