GitHub user viirya opened a pull request:
https://github.com/apache/spark/pull/15389
[SPARK-17817][PySpark] PySpark RDD Repartitioning Results in Highly Skewed Partition Sizes
## What changes were proposed in this pull request?
Quoted from the JIRA description:
Calling repartition on a PySpark RDD to increase the number of partitions
results in highly skewed partition sizes, with most having 0 rows. The
repartition method should evenly spread out the rows across the partitions, and
this behavior is correctly seen on the Scala side.
Please refer to the following code for a reproducible example of this issue:

```python
num_partitions = 20000
a = sc.parallelize(range(int(1e6)), 2)  # start with 2 even partitions
l = a.repartition(num_partitions).glom().map(len).collect()  # get length of each partition
min(l), max(l), sum(l) / len(l), len(l)  # skewed!
```
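On an affected build, the skew is easy to quantify from `l`; per the JIRA report, most partitions end up empty. A quick check (using only the names defined above) might look like:

```python
# Count empty partitions and find the largest one to make the skew concrete.
empty = sum(1 for n in l if n == 0)
print("%d of %d partitions are empty; the largest holds %d rows"
      % (empty, len(l), max(l)))
```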
In Scala's `repartition` code, elements are distributed evenly across the output partitions. However, the rows of a PySpark RDD are pickled into batches, and each batch reaches the JVM as a single binary blob, so the distribution fails: whole batches, not individual rows, get spread across partitions. We need to convert the RDD in Python to Java objects before repartitioning.
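As an illustration of that idea (not the exact patch), here is a minimal sketch built on PySpark's private helpers `RDD._to_java_object_rdd` and `SerDeUtil.javaToPython`; both are internal APIs and may differ between Spark versions:

```python
from pyspark.rdd import RDD
from pyspark.serializers import AutoBatchedSerializer, PickleSerializer

def repartition_evenly(rdd, num_partitions):
    # Unpickle the batched Python rows into one Java object per row, so that
    # Scala's repartition spreads individual rows instead of whole pickled batches.
    java_rdd = rdd._to_java_object_rdd().repartition(num_partitions)
    # Re-pickle the shuffled rows so the result is readable on the Python side again.
    pickled = rdd.ctx._jvm.SerDeUtil.javaToPython(java_rdd)
    return RDD(pickled, rdd.ctx, AutoBatchedSerializer(PickleSerializer()))
```

With this sketch, `repartition_evenly(a, num_partitions).glom().map(len).collect()` should report roughly even partition sizes instead of the skew shown above.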
## How was this patch tested?
Jenkins tests.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/viirya/spark-1 pyspark-rdd-repartition
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/spark/pull/15389.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #15389
----
commit be8c509a14506817cce500e845064a2ca7edcc23
Author: Liang-Chi Hsieh <[email protected]>
Date: 2016-10-07T04:59:37Z
Fix pyspark.rdd repartition.
----