sunchao commented on a change in pull request #31066:
URL: https://github.com/apache/spark/pull/31066#discussion_r553121035



##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala
##########
@@ -675,7 +675,7 @@ case class AlterTableRecoverPartitionsCommand(
     // This is always the case for Hive format tables, but is not true for Datasource tables created
     // before Spark 2.1 unless they are converted via `msck repair table`.
     spark.sessionState.catalog.alterTable(table.copy(tracksPartitionsInCatalog = true))
-    catalog.refreshTable(tableName)
+    spark.catalog.refreshTable(tableIdentWithDB)

Review comment:
       I think there are cases like `alterTableStats` (maybe the only one?) that only trigger a metadata change. Among the call sites above, some already uncache the table data, although via different paths such as `CommandUtils.uncacheTableOrView` or `cacheManager.uncacheQuery`. Also, the other `refreshTable` recaches the target table, but sometimes it seems we only want to remove the cache while still refreshing the metadata.
   
   Of course, it would be very helpful if we could simplify the code a bit. It also seems there are still quite a few cases where the cache is not properly handled, in both v1 and v2.
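   To illustrate the distinction above, here is a minimal, self-contained Scala sketch (these are toy types, not Spark's actual `CacheManager` or `Catalog` classes) contrasting the two behaviors: dropping a table's cache entry while invalidating metadata, versus refreshing metadata and recaching the table.

```scala
// Toy model of the two cache-handling paths discussed above.
// Hypothetical types for illustration only; not Spark internals.
object CacheSketch {
  final case class TableState(metadataVersion: Int, cached: Boolean)

  // Like uncaching plus a metadata refresh: the cache entry is dropped
  // and the catalog metadata is invalidated (version bumped).
  def uncacheAndRefresh(s: TableState): TableState =
    s.copy(metadataVersion = s.metadataVersion + 1, cached = false)

  // Like the recaching `refreshTable` path: metadata is invalidated
  // and the table is immediately recached.
  def refreshAndRecache(s: TableState): TableState =
    s.copy(metadataVersion = s.metadataVersion + 1, cached = true)
}
```

   Both paths refresh metadata; they differ only in whether the cache entry survives, which is the behavioral choice each DDL command needs to make deliberately.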




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


