[
https://issues.apache.org/jira/browse/SPARK-24437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16679560#comment-16679560
]
Eyal Farago commented on SPARK-24437:
-------------------------------------
[~dvogelbacher], what about the _checkpoint_ approach?
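Roughly along these lines (untested sketch; _spark_ is your SparkSession, _df_ the query result, and the checkpoint directory is a placeholder):
{code:scala}
// checkpoint() is eager by default: it materializes the data and truncates
// the lineage, so the cached plan no longer references the broadcast
spark.sparkContext.setCheckpointDir("/tmp/spark-checkpoints") // placeholder path
val checkpointed = df.checkpoint()
checkpointed.cache()
{code}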
Another possibility: if the query results are actually rather small, you could force them into memory, convert them into _Dataset_s, and cache those. That way you get rid of the broadcasts and the lineage entirely, effectively storing only what you need. This still has a minor drawback: your Datasets are now built on top of a parallelized-collection RDD, which keeps a memory footprint in the driver's heap.
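Something like this (untested sketch; the case class and all names here are made up, and _spark_/_df_ are assumed to exist as above):
{code:scala}
import spark.implicits._

// hypothetical schema for the small query result
case class Rec(id: Long, name: String)

// collect() pulls the rows to the driver, so the rebuilt Dataset carries
// no trace of the original plan (and its broadcast) in its lineage
val rows: Array[Rec] = df.as[Rec].collect()

// the new Dataset is backed by a parallelized collection, which is why the
// rows also occupy the driver's heap (the drawback mentioned above)
val detached = spark.createDataset(rows.toSeq)
detached.cache()
{code}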
Re. your question about why the broadcast is kept as part of the lineage: answering that would require a long trip down the rabbit hole to understand how the plan is transformed and represented once it is cached... As you wrote yourself, this is a rather unusual use case, so it might require unusual handling on your side...
> Memory leak in UnsafeHashedRelation
> -----------------------------------
>
> Key: SPARK-24437
> URL: https://issues.apache.org/jira/browse/SPARK-24437
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 2.2.0
> Reporter: gagan taneja
> Priority: Critical
> Attachments: Screen Shot 2018-05-30 at 2.05.40 PM.png, Screen Shot
> 2018-05-30 at 2.07.22 PM.png, Screen Shot 2018-11-01 at 10.38.30 AM.png
>
>
> There seems to be a memory leak with
> org.apache.spark.sql.execution.joins.UnsafeHashedRelation.
> We have a long-running instance of STS (Spark Thrift Server).
> With each query execution that requires a broadcast join, an
> UnsafeHashedRelation is registered for cleanup in ContextCleaner. A
> reference to the UnsafeHashedRelation is held by some other collection,
> so it never becomes eligible for GC, and because of that ContextCleaner
> is unable to clean it up.