Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/5848#discussion_r29537987
--- Diff: core/src/main/scala/org/apache/spark/Partitioner.scala ---
@@ -118,7 +118,8 @@ class RangePartitioner[K : Ordering : ClassTag, V](
       Array.empty
     } else {
       // This is the sample size we need to have roughly balanced output partitions, capped at 1M.
-      val sampleSize = math.min(20.0 * partitions, 1e6)
+      val maxSamples =
+        rdd.sparkContext.getConf.getDouble("spark.partitioner.max_samples", 1e6)
--- End diff ---
`max_samples` -> `maxSamples`. You are trying to turn *down* the max, right?
Just wondering out loud -- was 1e6 arbitrary and probably too big as a max to
begin with? The cap only kicks in at 50000 or more partitions, which is huge.
Would a max of... 1e5 or 1e4 be more sensible for 99% of cases? I realize
there's no single value that always works, but I'm wondering if we can avoid
another flag.
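To make the 50000-partition arithmetic concrete, here is a minimal standalone
sketch, not the PR's actual code: the `maxSamples` parameter stands in for the
proposed (renamed) config value, and the formula mirrors the existing
20-samples-per-partition heuristic shown in the diff.

    object SampleSizeSketch {
      // Existing heuristic: 20 samples per output partition, capped at maxSamples.
      def sampleSize(partitions: Int, maxSamples: Double = 1e6): Double =
        math.min(20.0 * partitions, maxSamples)

      def main(args: Array[String]): Unit = {
        println(sampleSize(1000))       // 20000.0   -- default 1e6 cap never reached
        println(sampleSize(50000))      // 1000000.0 -- 20 * 50000 hits the 1e6 cap exactly
        println(sampleSize(100000))     // 1000000.0 -- capped
        println(sampleSize(10000, 1e5)) // 100000.0  -- a 1e5 cap already binds from 5000 partitions
      }
    }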