We simply hold on to a reference to the RDD after it has been cached, so
we keep a single Map[String, RDD[X]] of cached RDDs for the application.
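As a minimal sketch of that idea (the object name CachedRdds, the mutable Map,
and the register/lookup/release methods are illustrative, not from the original
post), an application-level registry might look like:

  import org.apache.spark.rdd.RDD
  import scala.collection.mutable

  // Application-level registry of cached RDDs, keyed by name.
  // Holding these strong references keeps the RDDs reachable, so the
  // context cleaner does not treat them as unreferenced while other
  // jobs in the shared SparkContext still need them.
  object CachedRdds {
    private val rdds = mutable.Map.empty[String, RDD[_]]

    def register(name: String, rdd: RDD[_]): RDD[_] = synchronized {
      rdds.getOrElseUpdate(name, rdd.cache())
    }

    def lookup(name: String): Option[RDD[_]] = synchronized {
      rdds.get(name)
    }

    def release(name: String): Unit = synchronized {
      rdds.remove(name).foreach(_.unpersist())
    }
  }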


On Wed, Jul 9, 2014 at 11:00 AM, premdass <premdas...@yahoo.co.in> wrote:

> Hi,
>
> Yes, I am caching the RDDs by calling the cache method.
>
>
> May I ask how you are sharing RDDs across jobs in the same context? By the
> RDD name? I tried printing the RDDs of the Spark context, and when
> referenceTracking is enabled, I get an empty list after the cleanup.
>
> Thanks,
> Prem
>
