sunchao commented on a change in pull request #30211:
URL: https://github.com/apache/spark/pull/30211#discussion_r520028366



##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/DropTableExec.scala
##########
@@ -26,13 +27,16 @@ import org.apache.spark.sql.connector.catalog.{Identifier, TableCatalog}
  * Physical plan node for dropping a table.
  */
 case class DropTableExec(
+    session: SparkSession,
     catalog: TableCatalog,
     ident: Identifier,
     ifExists: Boolean,
     purge: Boolean) extends V2CommandExec {
 
   override def run(): Seq[InternalRow] = {
     if (catalog.tableExists(ident)) {
+      val table = catalog.loadTable(ident)
+      session.sharedState.cacheManager.uncacheV2Table(table)

Review comment:
      Thanks @cloud-fan. Originally I used the first option, but since this code path is called regardless of whether the target table is cached, it also needs to handle streaming tables. And since `SparkSession.table` always creates an `UnresolvedRelation` with `isStreaming = false`, the call will fail later on for those tables.
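   For context, this is roughly what the first option would look like in `DropTableExec#run` (a sketch only; the qualified-name construction and the `uncacheQuery` call are my approximation, not the actual change):

```scala
// Sketch of the first option (assumed shape, not the final patch):
// uncache through the Dataset API before dropping the table.
override def run(): Seq[InternalRow] = {
  if (catalog.tableExists(ident)) {
    // SparkSession.table resolves through an UnresolvedRelation created with
    // isStreaming = false, so this breaks when `ident` is a streaming table.
    val qualifiedName =
      (catalog.name() +: ident.namespace() :+ ident.name()).mkString(".")
    session.sharedState.cacheManager.uncacheQuery(
      session.table(qualifiedName), cascade = true)
    catalog.dropTable(ident)
  }
  Seq.empty
}
```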
   
   I think the second option is promising. Will try that. Thanks.
   
   



