Imran Rashid created SPARK-5785:
-----------------------------------
Summary: Pyspark does not support narrow dependencies
Key: SPARK-5785
URL: https://issues.apache.org/jira/browse/SPARK-5785
Project: Spark
Issue Type: Improvement
Components: PySpark
Reporter: Imran Rashid
Joins (and cogroups, etc.) are always treated as having "wide" dependencies in
pyspark; they are never narrow. This can cause unnecessary shuffles. e.g.,
this simple job should shuffle rddA & rddB once each, but it will also do a
third shuffle of the unioned data:
{code}
# both RDDs are hash-partitioned into 64 partitions by the same partitioner
rddA = sc.parallelize(range(100)).map(lambda x: (x, x)).partitionBy(64)
rddB = sc.parallelize(range(100)).map(lambda x: (x, x)).partitionBy(64)
# this join could reuse the existing partitioning, but pyspark shuffles again
joined = rddA.join(rddB)
joined.count()

>>> rddA._partitionFunc == rddB._partitionFunc
True
{code}
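One way to see the extra stage (just a sketch, not part of the original report; it assumes a live SparkContext sc) is to look at the lineage with RDD.toDebugString(), which shows a shuffle introduced by the join itself even though both inputs were already partitioned by the same partitioner:

{code}
# sketch assuming an existing SparkContext `sc`; not part of the original report
rddA = sc.parallelize(range(100)).map(lambda x: (x, x)).partitionBy(64)
rddB = sc.parallelize(range(100)).map(lambda x: (x, x)).partitionBy(64)
joined = rddA.join(rddB)

# the lineage shows a shuffle stage for the join, on top of the shuffles
# already done by the two partitionBy calls
# (toDebugString() may return bytes, depending on the pyspark version)
print(joined.toDebugString())
{code}

For comparison, the Scala API gives the same join a narrow dependency when both RDDs share a partitioner, so no extra shuffle stage appears there.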
(Or the docs should explain somewhere that this feature is missing from pyspark.)