[ https://issues.apache.org/jira/browse/SPARK-1065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14094987#comment-14094987 ]
Vlad Frolov commented on SPARK-1065:
------------------------------------
[~davies] I understand that if you use broadcast explicitly, the closure won't
be huge, but the point of that PR was also "1. Users won't need to decide what
to broadcast anymore, unless they would want to use a large object multiple
times in different operations".
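
For reference, here is a minimal sketch of the two usage patterns being
compared, assuming a local SparkContext and a small dictionary standing in
for the large object (the names here are illustrative only):

    from pyspark import SparkContext

    sc = SparkContext("local[2]", "broadcast-demo")
    lookup = {i: i * i for i in range(1000)}  # stand-in for a large object

    # Explicit broadcast: one copy is shipped per executor and the task
    # closure stays small.
    bc = sc.broadcast(lookup)
    total = sc.parallelize(range(1000)).map(lambda x: bc.value[x]).sum()

    # Implicit capture: referencing `lookup` directly pulls it into the
    # pickled closure of every task, which is what automatic broadcasting
    # of large closure variables would avoid.
    total2 = sc.parallelize(range(1000)).map(lambda x: lookup[x]).sum()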
> PySpark runs out of memory with large broadcast variables
> ---------------------------------------------------------
>
> Key: SPARK-1065
> URL: https://issues.apache.org/jira/browse/SPARK-1065
> Project: Spark
> Issue Type: Bug
> Components: PySpark
> Affects Versions: 0.7.3, 0.8.1, 0.9.0
> Reporter: Josh Rosen
> Assignee: Davies Liu
>
> PySpark's driver components may run out of memory when broadcasting large
> variables (say 1 gigabyte).
> Because PySpark's broadcast is implemented on top of Java Spark's broadcast
> by broadcasting a pickled Python object as a byte array, we may be retaining
> multiple copies of the large object: a pickled copy in the JVM and a
> deserialized copy in the Python driver.
> The problem could also be due to memory requirements during pickling.
> PySpark is also affected by broadcast variables not being garbage collected.
> Adding an unpersist() method to broadcast variables may fix this:
> https://github.com/apache/incubator-spark/pull/543.
> As a first step to fixing this, we should write a failing test to reproduce
> the error (a rough sketch of such a reproduction follows this quote).
> This was discovered by [~sandy]: ["trouble with broadcast variables on
> pyspark"|http://apache-spark-user-list.1001560.n3.nabble.com/trouble-with-broadcast-variables-on-pyspark-tp1301.html].
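
A rough reproduction along these lines could exercise the failure; this is a
sketch only, with a placeholder payload size (the report above cites ~1 GB
payloads) and the unpersist() call proposed in the pull request linked above:

    from pyspark import SparkContext

    sc = SparkContext("local[2]", "broadcast-oom-repro")

    # Placeholder payload; the reported failures involve objects around 1 GB.
    payload = bytearray(512 * 1024 * 1024)

    # At this point the driver may hold the Python object, its pickled form,
    # and the byte-array copy handed to the JVM broadcast all at once.
    bc = sc.broadcast(payload)

    # Touch the value on executors so the broadcast is actually materialized.
    n = sc.parallelize(range(10)).map(lambda _: len(bc.value)).first()
    assert n == len(payload)

    # unpersist() releases the JVM-side copy; without it the memory is held
    # until the context shuts down.
    bc.unpersist()
    sc.stop()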