Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/21291#discussion_r188953639
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/ConfigBehaviorSuite.scala ---
@@ -39,7 +39,9 @@ class ConfigBehaviorSuite extends QueryTest with SharedSQLContext {
   def computeChiSquareTest(): Double = {
     val n = 10000
     // Trigger a sort
-    val data = spark.range(0, n, 1, 1).sort('id.desc)
+    // Range has range partitioning in its output now. To have a range shuffle, we
+    // need to run a repartition first.
+    val data = spark.range(0, n, 1, 1).repartition(10).sort('id.desc)
--- End diff --
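For context, a minimal sketch of how the per-partition row counts after this range shuffle could be turned into a chi-sq value (this is illustrative only, not the suite's exact implementation; it assumes an active SparkSession `spark` and commons-math3 on the classpath):

```scala
import org.apache.commons.math3.stat.inference.ChiSquareTest
import org.apache.spark.sql.functions.col

val n = 10000
// repartition(10) forces a shuffle so the following sort introduces a
// range exchange, as in the diff above.
val data = spark.range(0, n, 1, 1).repartition(10).sort(col("id").desc)

// Observed row count per output partition of the range-partitioned data.
val observed: Array[Long] = data.rdd
  .mapPartitions(iter => Iterator(iter.size.toLong))
  .collect()

// Expected counts under a perfectly uniform split across the partitions.
val expected: Array[Double] =
  Array.fill(observed.length)(n.toDouble / observed.length)

// Large chi-sq values indicate skewed range boundaries (poor sampling).
val chiSq = new ChiSquareTest().chiSquare(expected, observed)
```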
This test uses `SQLConf.RANGE_EXCHANGE_SAMPLE_SIZE_PER_PARTITION` to change the
sample size per partition and then checks the chi-sq value. It samples just 1 point,
so the chi-sq value is expected to be high.
If we change the input from 1 partition to 10 partitions, the chi-sq value will change too.
Should we do this?
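To illustrate the dependence on the sample size, a hedged sketch of how such a config-sensitive check might read (the config constant is from `SQLConf`, but the sample sizes and thresholds below are assumptions for illustration, not the suite's actual values):

```scala
import org.apache.spark.sql.internal.SQLConf

// Sampling only 1 point per input partition gives poor range boundaries,
// so partition sizes are skewed and the chi-sq value is large.
withSQLConf(SQLConf.RANGE_EXCHANGE_SAMPLE_SIZE_PER_PARTITION.key -> "1") {
  assert(computeChiSquareTest() > 300)  // hypothetical threshold
}

// With a larger sample size the boundaries improve and the value drops.
withSQLConf(SQLConf.RANGE_EXCHANGE_SAMPLE_SIZE_PER_PARTITION.key -> "100") {
  assert(computeChiSquareTest() < 10)   // hypothetical threshold
}
```

Changing the number of input partitions in `computeChiSquareTest` would shift both of these values, which is why the thresholds would need revisiting.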
---