Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15896
```scala
spark.range(1, 10).toDF("id1").write.format("json").saveAsTable("jt1")
spark.range(1, 10).toDF("id2").write.format("json").saveAsTable("jt2")
sql("CREATE VIEW testView AS SELECT * FROM jt1 JOIN jt2 ON id1 == id2")
// Cache is empty at the beginning
assert(spark.sharedState.cacheManager.isEmpty)
sql("CACHE TABLE testView")
assert(spark.catalog.isCached("testView"))
// Cache is not empty
assert(!spark.sharedState.cacheManager.isEmpty)
// drop a table referenced by a cached view
sql("DROP TABLE jt1")
// So far everything is fine
// Failed to uncache the view
val e = intercept[AnalysisException] {
sql("UNCACHE TABLE testView")
}.getMessage
assert(e.contains("Table or view not found: `default`.`jt1`"))
// We are unable to drop it from the cache
assert(!spark.sharedState.cacheManager.isEmpty)
```
@hvanhovell The example above reproduces the issue.