[ https://issues.apache.org/jira/browse/SPARK-9109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14631081#comment-14631081 ]

Tien-Dung LE commented on SPARK-9109:
-------------------------------------

I think the caching is done on purpose, so we should keep it. The solution is 
to keep track of all the cached RDDs and unpersist them later (when 
graph.unpersist is called).
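
A minimal sketch of the idea (the names below, e.g. CachedRddTracker and 
register, are hypothetical, not the actual GraphX internals): every RDD that 
gets cached along the way is recorded, and unpersisting the graph releases 
them all.

{code}
import scala.collection.mutable.ArrayBuffer
import org.apache.spark.rdd.RDD

// Hypothetical helper, for illustration only: records every RDD it
// caches so that they can all be unpersisted together later.
class CachedRddTracker {
  private val cachedRdds = ArrayBuffer.empty[RDD[_]]

  // Cache an RDD and remember it for later cleanup.
  def register[T](rdd: RDD[T]): RDD[T] = {
    cachedRdds += rdd
    rdd.cache()
  }

  // To be called from graph.unpersist(): release every recorded RDD.
  def unpersistAll(blocking: Boolean = true): Unit = {
    cachedRdds.foreach(_.unpersist(blocking))
    cachedRdds.clear()
  }
}
{code}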

I can propose a change for that, but it would be very kind of you to point me 
to a document (procedure) describing how to make a change and create a PR.

> Unpersist a graph object does not work properly
> -----------------------------------------------
>
>                 Key: SPARK-9109
>                 URL: https://issues.apache.org/jira/browse/SPARK-9109
>             Project: Spark
>          Issue Type: Bug
>          Components: GraphX
>    Affects Versions: 1.3.1, 1.4.0
>            Reporter: Tien-Dung LE
>            Priority: Minor
>
> Unpersist a graph object does not work properly.
> Here is the code to reproduce the issue:
> {code}
> import org.apache.spark.graphx._
> import org.apache.spark.rdd.RDD
> import org.slf4j.LoggerFactory
> import org.apache.spark.graphx.util.GraphGenerators
> val graph: Graph[Long, Long] =
>   GraphGenerators.logNormalGraph(sc, numVertices = 100)
>     .mapVertices( (id, _) => id.toLong )
>     .mapEdges( e => e.attr.toLong )
>   
> graph.cache().numEdges
> graph.unpersist()
> {code}
> After the unpersist call there should not be any cached RDDs left in storage 
> (http://localhost:4040/storage/), yet some still appear there.
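> Besides the web UI, one way to check is from the shell (a sketch; 
> sc.getPersistentRDDs is the existing SparkContext API):
> {code}
> // Lists every RDD still marked as persistent in this SparkContext.
> sc.getPersistentRDDs.foreach { case (id, rdd) =>
>   println(s"RDD $id is still cached at level ${rdd.getStorageLevel.description}")
> }
> {code}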


