[ https://issues.apache.org/jira/browse/SPARK-2876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Davies Liu resolved SPARK-2876.
-------------------------------
Resolution: Fixed
Fix Version/s: 1.1.0
> RDD.partitionBy loads entire partition into memory
> --------------------------------------------------
>
> Key: SPARK-2876
> URL: https://issues.apache.org/jira/browse/SPARK-2876
> Project: Spark
> Issue Type: Bug
> Components: PySpark
> Affects Versions: 1.0.1
> Reporter: Nathan Howell
> Fix For: 1.1.0
>
>
> {{RDD.partitionBy}} fails with an OOM in the PySpark daemon process when
> given a relatively large dataset. The use of
> {{BatchedSerializer(UNLIMITED_BATCH_SIZE)}} seems suspect; most other RDD
> methods use {{self._jrdd_deserializer}}.
> {code}
> y = x.keyBy(...)
> z = y.partitionBy(512) # fails
> z = y.repartition(512) # succeeds
> {code}
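>
> A minimal, self-contained reproduction sketch (the dataset size, key
> function, and partition count below are illustrative assumptions, not
> taken from the original report):
> {code}
> from pyspark import SparkContext
>
> sc = SparkContext(appName="partitionBy-oom-repro")
>
> # Illustrative dataset; a real reproduction needs enough data that an
> # unlimited serializer batch exhausts the PySpark daemon's memory.
> x = sc.parallelize(xrange(10 * 1000 * 1000))
>
> # keyBy wraps each element as (key(v), v); the key function is arbitrary.
> y = x.keyBy(lambda v: v % 1024)
>
> z = y.partitionBy(512)    # OOMs in the daemon on large inputs
> # z = y.repartition(512)  # the same data repartitions successfully
>
> z.count()  # transformations are lazy; an action triggers the shuffle
> {code}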
--
This message was sent by Atlassian JIRA (v6.3.4#6332)