LantaoJin commented on issue #27185: [SPARK-30494][SQL] Avoid duplicated cached RDD when replace an existing view
URL: https://github.com/apache/spark/pull/27185#issuecomment-581030414
 
 
   > I think it makes sense, but we should follow how similar things are done in DROP TABLE.
   
   I think this is similar to DROP TABLE.
   In `DropTableCommand`:
   
https://github.com/apache/spark/blob/da32d1e6b5cc409f408384576002ccf63a83e9a1/sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala#L239
   ```scala
         try {
           sparkSession.sharedState.cacheManager.uncacheQuery(
             sparkSession.table(tableName), cascade = !isTempView)
         } catch {
           case NonFatal(e) => log.warn(e.toString, e)
         }
   ```
   I added the `sparkSession.catalog.uncacheTable()` call in views.scala; `uncacheTable()` has similar logic:
   
https://github.com/apache/spark/blob/69ab94ff24f471783e29cc7853c0eee25ea2d88c/sql/core/src/main/scala/org/apache/spark/sql/execution/command/views.scala#L114
   ```scala
     override def uncacheTable(tableName: String): Unit = {
       val tableIdent = sparkSession.sessionState.sqlParser.parseTableIdentifier(tableName)
       val cascade = !sessionCatalog.isTemporaryTable(tableIdent)
       sparkSession.sharedState.cacheManager.uncacheQuery(sparkSession.table(tableName), cascade)
     }
   ```
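   
   For reference, here is a rough sketch of where the uncache call could sit on the view-replacement path. The guard and the identifiers (`replace`, `viewIdent`, `catalog`) are illustrative placeholders, not the exact diff in this PR:
   ```scala
     // Illustrative sketch only: uncache the old plan before the view definition
     // is replaced, mirroring the DropTableCommand pattern quoted above.
     // NonFatal comes from scala.util.control, as in ddl.scala.
     if (replace && catalog.tableExists(viewIdent)) {
       try {
         sparkSession.catalog.uncacheTable(viewIdent.quotedString)
       } catch {
         case NonFatal(e) => log.warn(e.toString, e)
       }
     }
   ```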
