[https://issues.apache.org/jira/browse/SPARK-4903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14320439#comment-14320439]
Yin Huai commented on SPARK-4903:
---------------------------------
I believe this has been resolved in 1.3 ([see
this|https://github.com/apache/spark/blob/v1.3.0-snapshot1/sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/commands.scala#L61]).
I tried the following snippet in "build/sbt -Phive sparkShell" and verified that
the cached RDD was unpersisted after I dropped the table.
{code}
sqlContext.jsonRDD(sc.parallelize("""{"a":1}"""::Nil)).registerTempTable("test")
sqlContext.sql("create table jt as select a from test")
sqlContext.sql("cache table jt").collect
sqlContext.sql("select * from jt").collect
sqlContext.sql("drop table jt").collect
{code}
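The essence of the 1.3 fix linked above is that the DROP TABLE command path also evicts the table from the cache before removing it from the metastore, so no orphaned cached RDD is left behind. A minimal, self-contained Scala sketch of that idea (the names here are illustrative and not Spark's actual API):

```scala
import scala.collection.mutable

// Sketch of a catalog whose dropTable() also evicts the cache,
// mirroring the behavior of the 1.3 DropTable fix.
// Hypothetical names, not Spark internals.
object CacheOnDropSketch {
  val tables = mutable.Set[String]()
  val cached = mutable.Set[String]()

  def createTable(name: String): Unit = tables += name

  def cacheTable(name: String): Unit =
    if (tables(name)) cached += name

  // Pre-fix behavior removed only the metastore entry, stranding the
  // cached RDD. The fix uncaches first, while the table still resolves,
  // then drops the metastore entry.
  def dropTable(name: String): Unit = {
    cached -= name
    tables -= name
  }

  def main(args: Array[String]): Unit = {
    createTable("jt")
    cacheTable("jt")
    dropTable("jt")
    println(s"cached after drop: ${cached.contains("jt")}")
  }
}
```

This also explains the reporter's symptom below: once the metastore entry is gone, UNCACHE TABLE can no longer resolve the table name, so the eviction has to happen as part of the drop itself.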
> RDD remains cached after "DROP TABLE"
> -------------------------------------
>
> Key: SPARK-4903
> URL: https://issues.apache.org/jira/browse/SPARK-4903
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 1.3.0
> Environment: Spark master @ Dec 17
> (3cd516191baadf8496ccdae499771020e89acd7e)
> Reporter: Evert Lammerts
> Priority: Critical
>
> In beeline, when I run:
> {code:sql}
> CREATE TABLE test AS select col from table;
> CACHE TABLE test
> DROP TABLE test
> {code}
> The table is removed but the RDD is still cached. Running UNCACHE TABLE is
> no longer possible (the table is not found in the metastore).
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)