Yuri Bogomolov created SPARK-24889:
--------------------------------------

             Summary: dataset.unpersist() doesn't update storage memory stats
                 Key: SPARK-24889
                 URL: https://issues.apache.org/jira/browse/SPARK-24889
             Project: Spark
          Issue Type: Bug
          Components: Spark Core
    Affects Versions: 2.3.0
            Reporter: Yuri Bogomolov


Steps to reproduce:

1) Start a Spark cluster, and check the storage memory value on the Spark Web 
UI "Executors" tab (it should be zero on a freshly started cluster)

2) Run:
{code:java}
val df = spark.sqlContext.range(1, 1000000000) // ~1 billion rows
df.cache()                // mark the dataset for caching
df.count()                // action that materializes the cache
df.unpersist(true)        // blocking unpersist: waits until blocks are removed{code}
3) Check the storage memory value again; it now reads about 1 GB

The memory appears to be actually released, but the Web UI storage memory 
statistic is not updated. The stale metric makes cluster monitoring and 
management more complicated.

!image-2018-07-23-10-51-31-140.png!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
