sunchao commented on a change in pull request #31136:
URL: https://github.com/apache/spark/pull/31136#discussion_r556191960
##########
File path: sql/core/src/main/scala/org/apache/spark/sql/internal/CatalogImpl.scala
##########
@@ -396,8 +396,13 @@ class CatalogImpl(sparkSession: SparkSession) extends Catalog {
    */
   override def dropTempView(viewName: String): Boolean = {
     sparkSession.sessionState.catalog.getTempView(viewName).exists { viewDef =>
-      sparkSession.sharedState.cacheManager.uncacheQuery(
-        sparkSession, viewDef, cascade = false)
+      try {
+        val plan = sparkSession.sessionState.executePlan(viewDef)
+        sparkSession.sharedState.cacheManager.uncacheQuery(
+          sparkSession, plan.analyzed, cascade = false)
+      } catch {
+        case NonFatal(_) => // ignore
Review comment:
This is a good question. I think currently this won't happen, because:
1. When dropping a table or permanent view, we drop all the caches that reference it in a cascading fashion, so by the time we drop the cache entries themselves, they have already been invalidated (see the first sketch below).
2. On the other hand, we currently store a temp view as its analyzed logical plan, so it won't be analyzed again upon retrieval, which means we won't run into the error you mentioned. However, this also means the plan itself can become stale and potentially produce incorrect results (see the second sketch below). #31107 proposes to change this, following similar changes done in #30567, so that the behavior of temporary views, and of caches on them, is more aligned with that of permanent views.
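
To make point 1 concrete, here is a minimal sketch of the cascading cache invalidation, assuming a local session. The table name `t`, the app name, and the use of the public `Dataset.storageLevel` API to observe the cache are illustrative choices for the sketch, not code from this PR.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.storage.StorageLevel

val spark = SparkSession.builder()
  .master("local[*]")
  .appName("cascading-uncache-sketch")
  .getOrCreate()
import spark.implicits._

// A throwaway table for the sketch.
Seq((1, "a")).toDF("id", "v").write.saveAsTable("t")

// Cache a query that references `t` and materialize the cache entry.
val dependent = spark.table("t").select($"id")
dependent.cache()
dependent.count()

// Dropping `t` also drops every cache entry that references it (cascading),
// so by the time the cache entries themselves are dropped, they have
// already been invalidated.
spark.sql("DROP TABLE t")
assert(dependent.storageLevel == StorageLevel.NONE)
```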
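
And a minimal sketch of the staleness in point 2, reusing the `spark` session from the sketch above and assuming the pre-#31107 behavior where a temp view stores the analyzed plan captured at creation time; the names `src` and `tv` are made up for illustration.

```scala
spark.sql("CREATE TABLE src AS SELECT 1 AS id")
spark.sql("CREATE TEMPORARY VIEW tv AS SELECT * FROM src")

// The temp view holds the analyzed plan from creation time, so later reads
// do not re-resolve `src` against the catalog.
spark.sql("DROP TABLE src")
spark.sql("CREATE TABLE src AS SELECT 2 AS id")

// Depending on the Spark version, this may answer from the old, stale plan
// (or fail once the original table's data is gone) rather than reflect the
// recreated `src`. That is the staleness #31107 proposes to fix.
spark.sql("SELECT * FROM tv").show()
```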