aokolnychyi commented on code in PR #52599:
URL: https://github.com/apache/spark/pull/52599#discussion_r2446581117
##########
sql/core/src/main/scala/org/apache/spark/sql/classic/Catalog.scala:
##########
@@ -810,20 +811,12 @@ class Catalog(sparkSession: SparkSession) extends catalog.Catalog {
* @since 2.0.0
*/
override def uncacheTable(tableName: String): Unit = {
- // We first try to parse `tableName` to see if it is 2 part name. If so, then in HMS we check
Review Comment:
I needed to migrate this logic to uncaching by name. I feel like the try/catch was redundant. Keep in mind that `spark.table(tableName)` internally calls `parseMultipartIdentifier`, so it is not different from the old implementation. I see that `getLocalOrGlobalTempView` already handles multi-part names correctly.
It would be great to have another pair of eyes on this one, though.
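
For clarity, here is a rough sketch (not the PR's actual code) of the name-based flow described above. It assumes the code sits under `org.apache.spark.sql`, since `sessionState`, `sharedState`, `SessionCatalog`, and `CacheManager` are internal APIs; `parseMultipartIdentifier`, `table()`, and `getLocalOrGlobalTempView` come from this discussion, while the `uncacheQuery` overload and the cascade choice are assumptions.

```scala
package org.apache.spark.sql

// Sketch only: the internal pieces used here (sessionState, sharedState,
// SessionCatalog, CacheManager) are private[sql], so this must live under
// the org.apache.spark.sql package.
object UncacheByNameSketch {

  def uncacheByName(spark: SparkSession, tableName: String): Unit = {
    // Same parser entry point that spark.table(tableName) uses internally,
    // so the name-based path resolves the same multi-part identifier as the
    // old try/catch implementation did.
    val nameParts: Seq[String] =
      spark.sessionState.sqlParser.parseMultipartIdentifier(tableName)

    // Per the comment, getLocalOrGlobalTempView accepts multi-part names,
    // so local and global temp views are recognised in a single lookup.
    val isTempView =
      spark.sessionState.catalog.getLocalOrGlobalTempView(nameParts).isDefined

    // Resolving through table() re-parses tableName with
    // parseMultipartIdentifier before the cache entry is dropped.
    // Assumed overload: uncacheQuery(Dataset, cascade); the cascade choice
    // below is illustrative, not the PR's exact behaviour.
    spark.sharedState.cacheManager.uncacheQuery(
      spark.table(tableName), cascade = !isTempView)
  }
}
```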
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]