sunchao commented on a change in pull request #31308:
URL: https://github.com/apache/spark/pull/31308#discussion_r564400567



##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala
##########
@@ -561,16 +561,9 @@ case class TruncateTableCommand(
         }
       }
     }
-    // After deleting the data, invalidate the table to make sure we don't keep around a stale
-    // file relation in the metastore cache.
-    spark.sessionState.refreshTable(tableName.unquotedString)
-    // Also try to drop the contents of the table from the columnar cache
-    try {
-      spark.sharedState.cacheManager.uncacheQuery(spark.table(table.identifier), cascade = true)
-    } catch {
-      case NonFatal(e) =>
-        log.warn(s"Exception when attempting to uncache table $tableIdentWithDB", e)
-    }
+    // After deleting the data, refresh the table to make sure we don't keep around a stale
+    // file relation in the metastore cache and cached table data in the cache manager.
+    spark.catalog.refreshTable(tableIdentWithDB)

Review comment:
       So I wasn't able to reproduce it with the above example, sorry for the false alarm. It turned out the analysis exception is thrown later, when the cache is actually queried (rather than in `recacheByPlan` itself). Therefore, I think it should be fine in this case.
   
   I do agree we should keep it consistent (whether we use try-catch or not). IMO that can be done separately, though.
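
   As a minimal sketch of the consolidated refresh path in this hunk (assuming an active `SparkSession`; the wrapper object and method names below are illustrative, not part of the PR):
   
   ```scala
   import org.apache.spark.sql.SparkSession
   
   // Hypothetical helper, for illustration only.
   object TruncateRefreshSketch {
     def refreshAfterTruncate(spark: SparkSession, tableIdentWithDB: String): Unit = {
       // A single catalog refresh both invalidates the stale file relation in the
       // metastore cache and re-caches any cached data for the table, replacing
       // the earlier refreshTable + uncacheQuery pair from the removed lines.
       spark.catalog.refreshTable(tableIdentWithDB)
     }
   }
   ```
   
   Since the re-cache here is lazy, any analysis exception would surface only when the cached plan is actually queried, which matches the behavior described above.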




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


