roczei commented on code in PR #37679:
URL: https://github.com/apache/spark/pull/37679#discussion_r973767185
##########
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala:
##########
@@ -286,7 +284,7 @@ class SessionCatalog(
def dropDatabase(db: String, ignoreIfNotExists: Boolean, cascade: Boolean): Unit = {
val dbName = format(db)
- if (dbName == DEFAULT_DATABASE) {
+ if (dbName == defaultDatabase) {
Review Comment:
@cloud-fan,
If the default database is xyz and the current database is abc, we cannot drop the xyz database. Here are my validation steps:
1)
Test setup: with "default" as the default database, create the abc and xyz databases:
```
$ ./spark-shell --conf spark.sql.catalogImplementation=hive
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
22/09/18 21:23:12 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Spark context Web UI available at http://localhost:4040
Spark context available as 'sc' (master = local[*], app id = local-1663528992732).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /___/ .__/\_,_/_/ /_/\_\   version 3.4.0-SNAPSHOT
      /_/
Using Scala version 2.12.16 (OpenJDK 64-Bit Server VM, Java 11.0.16)
Type in expressions to have them evaluated.
Type :help for more information.
scala> spark.sql("create database xyz")
22/09/18 21:23:19 WARN HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
22/09/18 21:23:19 WARN HiveConf: HiveConf of name hive.stats.retries.wait does not exist
22/09/18 21:23:21 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 2.3.0
22/09/18 21:23:21 WARN ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 2.3.0, comment = Set by MetaStore [email protected]
22/09/18 21:23:21 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
22/09/18 21:23:21 WARN ObjectStore: Failed to get database xyz, returning NoSuchObjectException
22/09/18 21:23:21 WARN ObjectStore: Failed to get database xyz, returning NoSuchObjectException
22/09/18 21:23:21 WARN ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
22/09/18 21:23:21 WARN ObjectStore: Failed to get database xyz, returning NoSuchObjectException
res0: org.apache.spark.sql.DataFrame = []
scala> spark.sql("create database abc")
22/09/18 21:23:40 WARN ObjectStore: Failed to get database abc, returning NoSuchObjectException
22/09/18 21:23:40 WARN ObjectStore: Failed to get database abc, returning NoSuchObjectException
22/09/18 21:23:40 WARN ObjectStore: Failed to get database abc, returning NoSuchObjectException
res1: org.apache.spark.sql.DataFrame = []
scala> spark.sql("show databases").show()
+---------+
|namespace|
+---------+
| abc|
| default|
| xyz|
+---------+
scala> :quit
```
2)
Restart the shell with the default database set to xyz, switch the current database to abc, then try to drop xyz:
```
$ ./spark-shell --conf spark.sql.catalogImplementation=hive --conf spark.sql.catalog.spark_catalog.defaultDatabase=xyz
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
22/09/18 21:24:05 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Spark context Web UI available at http://localhost:4040
Spark context available as 'sc' (master = local[*], app id = local-1663529046120).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /___/ .__/\_,_/_/ /_/\_\   version 3.4.0-SNAPSHOT
      /_/
Using Scala version 2.12.16 (OpenJDK 64-Bit Server VM, Java 11.0.16)
Type in expressions to have them evaluated.
Type :help for more information.
scala> spark.sql("show databases").show()
22/09/18 21:24:11 WARN HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
22/09/18 21:24:11 WARN HiveConf: HiveConf of name hive.stats.retries.wait does not exist
22/09/18 21:24:13 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 2.3.0
22/09/18 21:24:13 WARN ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 2.3.0, comment = Set by MetaStore [email protected]
+---------+
|namespace|
+---------+
| abc|
| default|
| xyz|
+---------+
scala> spark.sql("use database abc")
22/09/18 21:42:11 WARN HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
22/09/18 21:42:11 WARN HiveConf: HiveConf of name hive.stats.retries.wait does not exist
22/09/18 21:42:12 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 2.3.0
22/09/18 21:42:12 WARN ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 2.3.0, comment = Set by MetaStore [email protected]
22/09/18 21:42:12 WARN ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
res0: org.apache.spark.sql.DataFrame = []
scala> spark.sql("SELECT current_database() AS db").show()
+---+
| db|
+---+
|abc|
+---+
scala> spark.sql("drop database xyz")
org.apache.spark.sql.AnalysisException: Can not drop default database
  at org.apache.spark.sql.errors.QueryCompilationErrors$.cannotDropDefaultDatabaseError(QueryCompilationErrors.scala:635)
  at org.apache.spark.sql.catalyst.catalog.SessionCatalog.dropDatabase(SessionCatalog.scala:288)
  at org.apache.spark.sql.execution.datasources.v2.V2SessionCatalog.dropNamespace(V2SessionCatalog.scala:300)
  at org.apache.spark.sql.execution.datasources.v2.DropNamespaceExec.run(DropNamespaceExec.scala:42)
  at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result$lzycompute(V2CommandExec.scala:43)
  at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result(V2CommandExec.scala:43)
  at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.executeCollect(V2CommandExec.scala:49)
  at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.$anonfun$applyOrElse$1(QueryExecution.scala:98)
  at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:111)
  at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:171)
  at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:95)
  at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
  at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
  at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:98)
  at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:94)
  at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:505)
  at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:97)
  at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:505)
  at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:30)
```
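The behavior exercised above can be summarized in a small standalone sketch. This is a hypothetical, simplified model of the guard in the diff, not the actual Spark source: the object name `DropDatabaseGuard`, the `format` helper, and the exception type are assumptions for illustration; the point is that the comparison uses the configured `defaultDatabase` (here xyz) rather than the hard-coded `DEFAULT_DATABASE` constant, so dropping xyz is refused regardless of the current database.

```scala
// Hypothetical sketch of the guard discussed in this review (not Spark source).
object DropDatabaseGuard {
  // Spark normalizes database names before comparison; model that with lower-casing.
  private def format(db: String): String = db.toLowerCase

  /** Refuses to drop the configured default database, whatever the current database is. */
  def dropDatabase(db: String, defaultDatabase: String): Unit = {
    if (format(db) == format(defaultDatabase)) {
      // Mirrors QueryCompilationErrors.cannotDropDefaultDatabaseError.
      throw new IllegalArgumentException("Can not drop default database")
    }
    // ... the real implementation would drop the database here.
  }

  def main(args: Array[String]): Unit = {
    dropDatabase("abc", defaultDatabase = "xyz") // allowed: abc is not the default
    try {
      dropDatabase("XYZ", defaultDatabase = "xyz") // refused, case-insensitively
    } catch {
      case e: IllegalArgumentException => println(e.getMessage)
    }
  }
}
```

Note the check is independent of the session's current database, which matches the transcript: the current database is abc, yet dropping xyz still fails.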
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]