warriersruthi opened a new pull request, #42805:
URL: https://github.com/apache/spark/pull/42805

   ### What changes were proposed in this pull request?
   
   The method `requireDbExists(db)` in SessionCatalog.scala currently has no check for whether the database is the non-default one. The intent of the method is to throw `NoSuchDatabaseException` when a database does not exist, but the 'default' database always exists in the system and cannot be dropped by anyone.
   
   When a Spark 3.1 cluster is created using a metastore shared with a Spark 2.4 cluster, it throws the following alert for the default database:

    org.apache.spark.sql.catalyst.analysis.NoSuchDatabaseException: Database 'default' not found (state=08S01,code=0)

   This exception needs to be suppressed: it is misleading, because the Spark Thrift Server is in fact running fine.
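   As a rough illustration only (not the actual Spark patch), the proposed guard could look like the sketch below. The catalog lookup and the exception type are stand-ins: a simulated `databaseExists` replaces the real external-catalog call, and `NoSuchElementException` stands in for Spark's `NoSuchDatabaseException`.

   ```scala
   // Hypothetical sketch of the fix: skip the existence probe for the
   // always-present 'default' database so a transient or version-mismatched
   // metastore lookup cannot surface a spurious "Database 'default' not found".
   object RequireDbExistsSketch {
     val DefaultDatabase = "default"

     // Simulated catalog lookup; in Spark this would query the external catalog.
     def databaseExists(db: String): Boolean = Set("default", "sales").contains(db)

     def requireDbExists(db: String): Unit = {
       // 'default' always exists and cannot be dropped, so only probe the
       // metastore for non-default databases.
       if (db != DefaultDatabase && !databaseExists(db)) {
         throw new NoSuchElementException(s"Database '$db' not found")
       }
     }

     def main(args: Array[String]): Unit = {
       requireDbExists("default") // never throws, even without a metastore probe
       requireDbExists("sales")   // still validated against the catalog
       println("ok")
     }
   }
   ```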
   
   
   ### Why are the changes needed?
   
   Because the 'default' database always exists in the system and cannot be dropped by anyone, throwing `NoSuchDatabaseException` for it is misleading; this exception should therefore be suppressed.
   The WARN stack trace is as follows:
   
   2023-06-06 07:13:34,931 WARN  [] thrift.ThriftCLIService: Error opening session:
   org.apache.hive.service.cli.HiveSQLException: Failed to open new session: org.apache.spark.sql.catalyst.analysis.NoSuchDatabaseException: Database 'default' not found
       at org.apache.spark.sql.hive.thriftserver.SparkSQLSessionManager.openSession(SparkSQLSessionManager.scala:85)
   Caused by: org.apache.spark.sql.catalyst.analysis.NoSuchDatabaseException: Database 'default' not found
       at org.apache.spark.sql.catalyst.catalog.SessionCatalog.requireDbExists(SessionCatalog.scala:222)
       at org.apache.spark.sql.catalyst.catalog.SessionCatalog.setCurrentDatabase(SessionCatalog.scala:326)
       at org.apache.spark.sql.connector.catalog.CatalogManager.setCurrentNamespace(CatalogManager.scala:104)
       at org.apache.spark.sql.execution.datasources.v2.SetCatalogAndNamespaceExec.$anonfun$run$2(SetCatalogAndNamespaceExec.scala:36)
       at org.apache.spark.sql.execution.datasources.v2.SetCatalogAndNamespaceExec.$anonfun$run$2$adapted(SetCatalogAndNamespaceExec.scala:36)
       at scala.Option.foreach(Option.scala:407)
       at org.apache.spark.sql.execution.datasources.v2.SetCatalogAndNamespaceExec.run(SetCatalogAndNamespaceExec.scala:36)
       at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result$lzycompute(V2CommandExec.scala:40)
       at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result(V2CommandExec.scala:40)
       at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.executeCollect(V2CommandExec.scala:46)
       at org.apache.spark.sql.Dataset.$anonfun$logicalPlan$1(Dataset.scala:228)
       at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3700)
       at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:107)
       at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:181)
       at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:94)
       at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
       at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:68)
       at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3698)
   
   ### Does this PR introduce _any_ user-facing change?
   No
   
   ### How was this patch tested?
   
   
   ### Was this patch authored or co-authored using generative AI tooling?
   No
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

