Yuri Bogomolov updated SPARK-24889:
-----------------------------------
Attachment: image-2018-07-23-10-53-58-474.png
> dataset.unpersist() doesn't update storage memory stats
> -------------------------------------------------------
>
> Key: SPARK-24889
> URL: https://issues.apache.org/jira/browse/SPARK-24889
> Project: Spark
> Issue Type: Bug
> Components: Spark Core
> Affects Versions: 2.3.0
> Reporter: Yuri Bogomolov
> Priority: Major
> Attachments: image-2018-07-23-10-53-58-474.png
>
>
> Steps to reproduce:
> 1) Start a Spark cluster and check the "Storage Memory" value on the Spark
> Web UI "Executors" tab (it should be zero on a freshly started cluster)
> 2) Run:
> {code:java}
> val df = spark.sqlContext.range(1, 1000000000) // ~1 billion rows
> df.cache()
> df.count()          // materialize the cached data (~1 GB of storage memory)
> df.unpersist(true)  // blocking unpersist
> {code}
> 3) Check the storage memory value again: it still shows 1 GB even though the dataset has been unpersisted
>
> It looks like the memory is actually released, but the stats are not updated (a snippet to
> cross-check this is included below the screenshot). This makes cluster monitoring and management harder.
> !image-2018-07-23-10-51-31-140.png!
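> The following spark-shell sketch can be used to cross-check that the executors really do free the memory. It is only a sketch, not part of the original repro: it relies on SparkContext.getExecutorMemoryStatus, which reports a (maxStorageMemory, remainingMemory) pair per block manager.
> {code:java}
> // Sketch: compare BlockManager-reported storage usage before and after unpersist.
> // Assumes a running spark-shell session where `spark` is the SparkSession.
> val sc = spark.sparkContext
>
> def usedStorageBytes(): Long =
>   sc.getExecutorMemoryStatus.values.map { case (max, remaining) => max - remaining }.sum
>
> val df = spark.sqlContext.range(1, 1000000000)
> df.cache()
> df.count()
> println(s"used after cache:     ${usedStorageBytes()} bytes")
>
> df.unpersist(true)
> println(s"used after unpersist: ${usedStorageBytes()} bytes")
> // If the memory really is released, the second value should drop back toward zero
> // while the "Storage Memory" column in the Executors tab still shows ~1 GB.
> {code}
> As far as I can tell, getExecutorMemoryStatus is answered by the BlockManagerMaster, while the Executors tab is populated from the AppStatusStore, so a mismatch between the two would point at the UI/stats path rather than at the memory itself.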