Nathan Howell created SPARK-2876:
------------------------------------
Summary: RDD.partitionBy loads entire partition into memory
Key: SPARK-2876
URL: https://issues.apache.org/jira/browse/SPARK-2876
Project: Spark
Issue Type: Bug
Components: PySpark
Affects Versions: 1.0.1
Reporter: Nathan Howell
{{RDD.partitionBy}} fails with an OOM in the PySpark daemon process when
given a relatively large dataset. The use of
{{BatchedSerializer(UNLIMITED_BATCH_SIZE)}} is suspect: it forces the entire
partition to be materialized in memory at once, whereas most other RDD
methods use {{self._jrdd_deserializer}}.
{code}
y = x.keyBy(...)
z = y.partitionBy(512) # fails
z = y.repartition(512) # succeeds
{code}
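To illustrate why the unlimited batch size is the likely culprit, here is a simplified model (not PySpark's actual serializer code; the {{batched}} helper and the sentinel value are assumptions for illustration). With a bounded batch size the serializer can stream a partition in small chunks; with an unlimited batch size the whole partition is buffered as one batch, which is where the OOM would come from:

```python
from itertools import islice

UNLIMITED_BATCH_SIZE = -1  # sentinel: "no limit" (illustrative, not PySpark's code)

def batched(iterator, batch_size):
    """Yield lists of at most batch_size items; batch_size <= 0 means all at once."""
    if batch_size <= 0:
        # Unlimited: materializes the entire iterator as a single batch.
        # Peak memory is proportional to the whole partition -> OOM risk.
        yield list(iterator)
        return
    it = iter(iterator)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            return
        # Peak memory here is only one small batch at a time.
        yield batch

bounded = list(batched(range(10), 3))
unbounded = list(batched(range(10), UNLIMITED_BATCH_SIZE))
print(bounded)    # [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]
print(unbounded)  # [[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]]
```

This is consistent with the observed behavior: {{repartition}} goes through a code path that streams with the RDD's normal deserializer, while {{partitionBy}} pins the whole partition in the daemon's memory.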
--
This message was sent by Atlassian JIRA
(v6.2#6252)