Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/21859#discussion_r211168107
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -1207,6 +1207,13 @@ object SQLConf {
     .intConf
     .createWithDefault(100)
+  val RANGE_EXCHANGE_SAMPLE_CACHE_ENABLE =
+    buildConf("spark.sql.execution.rangeExchange.sampleCache.enabled")
--- End diff ---
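The diff is cut off at the review anchor above. For illustration only, a minimal sketch of how a boolean flag like this is typically completed in `SQLConf` (the `.doc` text and the default value here are assumptions, not taken from the PR):

```scala
// Hypothetical completion of the conf above; doc string and default value
// are assumptions for illustration, not the actual values from this PR.
val RANGE_EXCHANGE_SAMPLE_CACHE_ENABLE =
  buildConf("spark.sql.execution.rangeExchange.sampleCache.enabled")
    .doc("When true, cache the RDD sampled by the range partitioner so the " +
      "child plan is not recomputed for the sampling pass.")
    .booleanConf
    .createWithDefault(false)
```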
I think this is a feature of Spark core instead of Spark SQL:
`RangePartitioner` is in Spark core, and we can apply this optimization to
`RDD.sort` as well.
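For context, a minimal standalone sketch (not code from this PR) showing that `RangePartitioner` is a Spark core class and that core's sort API (`RDD.sortByKey`, which the comment refers to as `RDD.sort`) builds one internally, which is why a sampling optimization in core would benefit both paths:

```scala
import org.apache.spark.{RangePartitioner, SparkConf, SparkContext}

import scala.util.Random

object RangePartitionerSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setMaster("local[*]").setAppName("range-partitioner-sketch"))

    // A pair RDD with random keys; RangePartitioner samples these keys.
    val pairs = sc.parallelize(1 to 100000).map(i => (Random.nextInt(), i))

    // Constructing a RangePartitioner runs a sampling job over `pairs`
    // to estimate balanced partition bounds.
    val partitioner = new RangePartitioner(8, pairs)
    println(s"computed ${partitioner.numPartitions} partitions")

    // RDD.sortByKey builds the same kind of RangePartitioner internally,
    // so an optimization of the sampling step would benefit it as well.
    val sorted = pairs.sortByKey(ascending = true, numPartitions = 8)
    println(sorted.take(3).mkString(", "))

    sc.stop()
  }
}
```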
---