imback82 commented on a change in pull request #26741: [SPARK-30104][SQL] Fix catalog resolution for 'global_temp'
URL: https://github.com/apache/spark/pull/26741#discussion_r355269076
 
 

 ##########
 File path: sql/catalyst/src/main/scala/org/apache/spark/sql/connector/catalog/LookupCatalog.scala
 ##########
 @@ -133,7 +133,11 @@ private[sql] trait LookupCatalog extends Logging {
         // For example, if the name of a custom catalog is the same with `GLOBAL_TEMP_DATABASE`,
         // this custom catalog can't be accessed.
         if (nameParts.head.equalsIgnoreCase(globalTempDB)) {
 
 Review comment:
   > ``` 
   > if (nameParts.length == 1) {
   >   Some((currentCatalog, currentNamespace ++ nameParts))
   > }
   > ```
   Looks like we cannot use the table lookup logic, since we have commands like `SHOW TABLES FROM testcat` where `testcat` needs to be resolved as a catalog.
   
   Basically, we have a conflict when a one-part name is given:
   1. It needs to be resolved as a catalog:
   ```
   CREATE TABLE testcat.table (id bigint, data string) USING foo
   SHOW TABLES FROM testcat
   ```
   2. It needs to be resolved as a multi-part name, not a catalog:
   ```
   CREATE TABLE testcat.testcat (id bigint, data string) USING foo
   USE testcat
   DESCRIBE TABLE testcat
   ```
   One way to resolve the conflict is to check the one-part name against the current catalog: if they are the same, the one-part name is used as an `Identifier` (case 2); otherwise, it is used as a catalog name (case 1). Does this sound reasonable?
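   To make the proposal concrete, here is a minimal sketch of that disambiguation rule. `OnePartResolution` and its `resolve` helper are hypothetical names for illustration, not Spark's actual `LookupCatalog` API:
   ```scala
   // Hedged sketch of the proposed rule for resolving a name against a catalog.
   object OnePartResolution {
     // Returns Left(catalogName) when the name should be treated as a catalog
     // (case 1), or Right(identifierParts) when it should be treated as an
     // identifier in the current catalog (case 2).
     def resolve(
         nameParts: Seq[String],
         currentCatalogName: String): Either[String, Seq[String]] = {
       nameParts match {
         case Seq(one) if one.equalsIgnoreCase(currentCatalogName) =>
           Right(nameParts)  // case 2: e.g. USE testcat; DESCRIBE TABLE testcat
         case Seq(one) =>
           Left(one)         // case 1: e.g. SHOW TABLES FROM testcat
         case _ =>
           Right(nameParts)  // multi-part names keep the existing lookup path
       }
     }
   }
   ```
   Under this rule, `DESCRIBE TABLE testcat` after `USE testcat` resolves `testcat` as an identifier, while `SHOW TABLES FROM testcat` under a different current catalog resolves it as a catalog.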

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
