amaliujia commented on code in PR #37287:
URL: https://github.com/apache/spark/pull/37287#discussion_r938088564
##########
R/pkg/tests/fulltests/test_sparkSQL.R:
##########
@@ -4098,14 +4098,13 @@ test_that("catalog APIs, listTables, getTable, listColumns, listFunctions, funct
                  c("name", "description", "dataType", "nullable", "isPartition", "isBucket"))
expect_equal(collect(c)[[1]][[1]], "speed")
expect_error(listColumns("zxwtyswklpf", "default"),
- paste("Error in listColumns : analysis error - Table",
- "'zxwtyswklpf' does not exist in database 'default'"))
+               paste("Table or view not found: spark_catalog.default.zxwtyswklpf"))
Review Comment:
Isn't this actually a user-facing behavior change, since the call now returns a different error message?
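As a plain illustration of why this counts as a behavior change (this is a hedged sketch in standalone Python, not Spark code; the two message strings are taken from the diff above), any caller or test that pattern-matches on the old error text would silently stop matching once the new message is returned:

```python
import re

# Old and new error messages, as shown in the diff above.
old_msg = ("Error in listColumns : analysis error - Table "
           "'zxwtyswklpf' does not exist in database 'default'")
new_msg = "Table or view not found: spark_catalog.default.zxwtyswklpf"

# A hypothetical caller that detects a missing table by matching the old wording.
missing_table = re.compile(r"does not exist in database")

print(bool(missing_table.search(old_msg)))  # True: matches the old message
print(bool(missing_table.search(new_msg)))  # False: no longer matches
```

So even though the new message is arguably clearer, existing code keyed to the old wording (like the `expect_error` assertion being updated here) breaks, which is the user-visible part of the change.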
##########
sql/core/src/main/scala/org/apache/spark/sql/catalog/Catalog.scala:
##########
@@ -33,36 +33,37 @@ import org.apache.spark.storage.StorageLevel
abstract class Catalog {
/**
- * Returns the current default database in this session.
+ * Returns the current database (namespace) in this session.
Review Comment:
There is another term we could use to refer to this: schema.
Have we decided to standardize on "namespace" in Spark?
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]