[
https://issues.apache.org/jira/browse/SPARK-24437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16673949#comment-16673949
]
Eyal Farago commented on SPARK-24437:
-------------------------------------
[~dvogelbacher],
I haven't looked too deeply into this, but here are two immediate conclusions
from the screenshot you attached:
# The broadcast is referenced from MapPartitionsRDD's f member, which seems
reasonable for a broadcast join.
# The entire thing is cached (CachedRDDBuilder). Is it possible you're caching
this Dataset? Unlike RDDs, Dataset persistence is manually managed, so cached
Datasets are not automatically garbage collected once the last reference is
dropped; they must be unpersisted explicitly (see the sketch below).
That said, I'd still expect Spark to cache only the in-memory representation
and not the entire RDD lineage, so this does look like some sort of bug,
perhaps an over-capturing function/closure in the caching code.
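A minimal sketch of the manual cache lifecycle, assuming a trivial Dataset
(names like spark and df are placeholders, not taken from the attached
screenshots):

{code:scala}
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("cache-lifecycle").getOrCreate()
import spark.implicits._

val df = Seq((1, "a"), (2, "b")).toDF("id", "value")

// persist() registers the plan with the cache manager (CachedRDDBuilder);
// the cached blocks survive even after df itself goes out of scope.
df.persist()
df.count() // materialize the cache

// Without this call, the cached blocks (and anything the cached plan
// references, e.g. a broadcast) stay pinned until the session ends.
df.unpersist()
{code}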
> Memory leak in UnsafeHashedRelation
> -----------------------------------
>
> Key: SPARK-24437
> URL: https://issues.apache.org/jira/browse/SPARK-24437
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 2.2.0
> Reporter: gagan taneja
> Priority: Critical
> Attachments: Screen Shot 2018-05-30 at 2.05.40 PM.png, Screen Shot
> 2018-05-30 at 2.07.22 PM.png, Screen Shot 2018-11-01 at 10.38.30 AM.png
>
>
> There seems to be a memory leak in
> org.apache.spark.sql.execution.joins.UnsafeHashedRelation.
> We have a long-running instance of STS (Spark Thrift Server).
> With each query execution requiring a broadcast join, an UnsafeHashedRelation
> is added for cleanup in ContextCleaner. This reference to
> UnsafeHashedRelation is being held by some other collection and never becomes
> eligible for GC, so ContextCleaner is unable to clean it.
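A minimal sketch of the reported pattern, assuming a long-running session that
repeatedly runs queries with a broadcast join (the table sizes, loop count, and
threshold below are illustrative only):

{code:scala}
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("broadcast-join-loop")
  // keep the small side under the threshold so the planner chooses a
  // BroadcastHashJoin (10 MB matches the default autoBroadcastJoinThreshold)
  .config("spark.sql.autoBroadcastJoinThreshold", 10L * 1024 * 1024)
  .getOrCreate()
import spark.implicits._

val large = spark.range(0L, 1000000L).toDF("id")

// Each iteration builds and broadcasts a fresh UnsafeHashedRelation for the
// small side; after the job it is registered with ContextCleaner for cleanup,
// which is where the reporter observes the references accumulating.
for (i <- 1 to 100) {
  val small = Seq((1L, s"run-$i"), (2L, s"run-$i")).toDF("id", "tag")
  large.join(small, "id").count()
}
{code}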