GitHub user holdenk opened a pull request:
https://github.com/apache/spark/pull/8569
[SPARK-9821][PYSPARK] pyspark-reduceByKey-should-take-a-custom-partitioner
In Scala, I can supply a custom partitioner to reduceByKey (and other
aggregation/repartitioning methods like aggregateByKey and combineByKey), but
as far as I can tell from the PySpark API, there's no way to do the same in
Python.
Here's an example of my code in Scala:
weblogs.map(s => (getFileType(s), 1)).reduceByKey(new FileTypePartitioner(), _ + _)
But I can't figure out how to do the same in Python. The closest I can get
is to call partitionBy before reduceByKey, like so:
weblogs.map(lambda s: (getFileType(s), 1)).partitionBy(3, hash_filetype).reduceByKey(lambda v1, v2: v1 + v2).collect()
But that defeats the purpose, because I'm shuffling twice instead of once,
so my performance is worse instead of better.
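If reduceByKey itself accepts a partition function, as this patch proposes, the
separate partitionBy step (and its extra shuffle) goes away. A minimal sketch of
what that could look like; the argument order, the "weblogs" input path, and the
get_file_type/hash_filetype helpers here are illustrative assumptions, not the
exact interface of this patch:

from pyspark import SparkContext

sc = SparkContext(appName="filetype-counts")

def get_file_type(line):
    # Hypothetical helper: pull a file extension out of a log line.
    return line.rsplit(".", 1)[-1]

def hash_filetype(file_type):
    # Custom partition function: choose a partition index per file type.
    return hash(file_type)

# With a partition function accepted by reduceByKey directly, the data is
# shuffled once, straight into the desired partitioning.
counts = (sc.textFile("weblogs")
            .map(lambda s: (get_file_type(s), 1))
            .reduceByKey(lambda v1, v2: v1 + v2, 3, hash_filetype)
            .collect())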
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/holdenk/spark
SPARK-9821-pyspark-reduceByKey-should-take-a-custom-partitioner
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/spark/pull/8569.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #8569
----
commit 8d272b3bf84a72c66c1529d2679d465038435f83
Author: Holden Karau <[email protected]>
Date: 2015-09-02T07:27:48Z
Add partitioner function
----