GitHub user mridulm commented on a diff in the pull request:
https://github.com/apache/spark/pull/5848#discussion_r29538466
--- Diff: core/src/main/scala/org/apache/spark/Partitioner.scala ---
@@ -118,7 +118,8 @@ class RangePartitioner[K : Ordering : ClassTag, V](
       Array.empty
     } else {
       // This is the sample size we need to have roughly balanced output partitions, capped at 1M.
-      val sampleSize = math.min(20.0 * partitions, 1e6)
+      val maxSamples =
+        rdd.sparkContext.getConf.getDouble("spark.partitioner.max_samples", 1e6)
--- End diff --
Not sure how 1e6 was arrived at, but in our jobs we hit this issue with
non-primitive keys at anything above 40k partitions. The actual number of keys
is not too large, which is why lowering the value does not affect the precision
of the estimate.
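
For reference, a minimal sketch of how the proposed knob would plug into the
existing cap. The config name spark.partitioner.max_samples is the one from
this diff; the surrounding names follow RangePartitioner, but this is only an
illustration, not the final patch:

    // Sketch only: replace the hard-coded 1e6 cap with a configurable upper bound.
    val maxSamples =
      rdd.sparkContext.getConf.getDouble("spark.partitioner.max_samples", 1e6)
    // Keep roughly 20 samples per partition, but never more than maxSamples in total.
    val sampleSize = math.min(20.0 * partitions, maxSamples)

With the 1e6 default, 20.0 * partitions hits the cap at 50,000 partitions;
lowering maxSamples moves that cut-off earlier, which shrinks how many sampled
keys end up on the driver and helps when the keys are non-primitive objects.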