roczei commented on code in PR #37679:
URL: https://github.com/apache/spark/pull/37679#discussion_r978187565


##########
sql/core/src/main/scala/org/apache/spark/sql/internal/SharedState.scala:
##########
@@ -148,13 +148,18 @@ private[sql] class SharedState(
     val externalCatalog = SharedState.reflect[ExternalCatalog, SparkConf, Configuration](
       SharedState.externalCatalogClassName(conf), conf, hadoopConf)
 
-    val defaultDbDefinition = CatalogDatabase(
-      SessionCatalog.DEFAULT_DATABASE,
-      "default database",
-      CatalogUtils.stringToURI(conf.get(WAREHOUSE_PATH)),
-      Map())
     // Create default database if it doesn't exist
-    if (!externalCatalog.databaseExists(SessionCatalog.DEFAULT_DATABASE)) {
+    // If database name not equals 'default', throw exception
+    if (!externalCatalog.databaseExists(SQLConf.get.defaultDatabase)) {
+      if (SessionCatalog.DEFAULT_DATABASE != SQLConf.get.defaultDatabase) {
+        throw new SparkException(s"Default catalog database '${SQLConf.get.defaultDatabase}' " +
+          s"not exist, please create it first or change default database to 'default'.")

Review Comment:
   Here is a validation:
   
   ```
   $ bin/spark-shell --conf spark.sql.catalog.spark_catalog.defaultDatabase=other_db
   Setting default log level to "WARN".
   To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
   22/09/23 04:00:12 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
   Spark context Web UI available at http://localhost:4040
   Spark context available as 'sc' (master = local[*], app id = local-1663898413533).
   Spark session available as 'spark'.
   Welcome to
         ____              __
        / __/__  ___ _____/ /__
       _\ \/ _ \/ _ `/ __/  '_/
      /___/ .__/\_,_/_/ /_/\_\   version 3.4.0-SNAPSHOT
         /_/
            
   Using Scala version 2.12.16 (OpenJDK 64-Bit Server VM, Java 11.0.16)
   Type in expressions to have them evaluated.
   Type :help for more information.
   
   scala> spark.sql("show databases").show()
   org.apache.spark.SparkDefaultCatalogDatabaseNotExistsException: [DEFAULT_CATALOG_DATABASE_NOT_EXISTS] Default catalog database other_db not exist, please create it first or change default database to 'default'.
     at org.apache.spark.sql.errors.QueryCompilationErrors$.defaultCatalogDatabaseNotExistsError(QueryCompilationErrors.scala:642)
     at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:156)
     at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:147)
     at org.apache.spark.sql.internal.BaseSessionStateBuilder.$anonfun$catalog$1(BaseSessionStateBuilder.scala:154)
     at org.apache.spark.sql.catalyst.catalog.SessionCatalog.externalCatalog$lzycompute(SessionCatalog.scala:123)
     at org.apache.spark.sql.catalyst.catalog.SessionCatalog.externalCatalog(SessionCatalog.scala:123)
     at org.apache.spark.sql.catalyst.catalog.SessionCatalog.listDatabases(SessionCatalog.scala:324)
     at org.apache.spark.sql.execution.datasources.v2.V2SessionCatalog.listNamespaces(V2SessionCatalog.scala:232)
     at org.apache.spark.sql.execution.datasources.v2.ShowNamespacesExec.run(ShowNamespacesExec.scala:42)
   ```
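   
   And as a quick sketch of the workaround the new error message suggests (assuming the same local metastore as in the run above; other_db is only my test name, and the config key is the one used above): creating the database once under the default configuration should let the custom defaultDatabase resolve in the next session.
   
   ```
   $ bin/spark-shell
   scala> spark.sql("CREATE DATABASE IF NOT EXISTS other_db")
   scala> :quit
   $ bin/spark-shell --conf spark.sql.catalog.spark_catalog.defaultDatabase=other_db
   scala> spark.sql("show databases").show()
   ```
   
   With the database already present, the existence check added in SharedState should no longer throw.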



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

