You can always use sqlContext.uncacheTable to uncache the old table.
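
For illustration, here is a minimal sketch of that pattern, assuming the
Spark 1.x SQLContext API; the table names and the Record case class are
made up for the example:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext

    case class Record(id: Int, value: String)

    val sc = new SparkContext(new SparkConf().setAppName("swap-cached-table"))
    val sqlContext = new SQLContext(sc)
    import sqlContext.createSchemaRDD   // implicit RDD[Record] -> SchemaRDD

    // Cache the existing data as a table (stored with columnar compression).
    val existingRDD = sc.parallelize(Seq(Record(1, "a"), Record(2, "b")))
    existingRDD.registerAsTable("existingTable")
    sqlContext.cacheTable("existingTable")

    // New data arrives: register the union under a new name and cache it.
    val newRDD = sc.parallelize(Seq(Record(3, "c")))
    existingRDD.union(newRDD).registerAsTable("newTable")
    sqlContext.cacheTable("newTable")

    // Drop the old table's cached blocks so the data is not held twice.
    sqlContext.uncacheTable("existingTable")

    sqlContext.sql("SELECT count(*) FROM newTable").collect().foreach(println)

After uncacheTable("existingTable") only the unioned table stays in memory,
so the two copies coexist only for the duration of the swap.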

On Fri, Sep 12, 2014 at 10:33 AM, pankaj.arora <pankajarora.n...@gmail.com>
wrote:

> Hi Patrick,
>
> What if all the data has to be kept in cache at all times? If applying union
> results in a new RDD, then caching that union would keep both the older data
> and the new data in memory, duplicating the data.
>
> Below is what I understood from your comment.
>
> sqlContext.cacheTable("existingTable") // caches the table as a SchemaRDD
> with columnar compression
>
> existingRDD.union(newRDD).registerAsTable("newTable")
>
> sqlContext.cacheTable("newTable") // duplicates the data in cache
>
