Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/22161#discussion_r211490349
--- Diff: R/pkg/tests/fulltests/test_sparkSQL.R ---
@@ -3613,11 +3613,11 @@ test_that("Collect on DataFrame when NAs exists at the top of a timestamp column
  test_that("catalog APIs, currentDatabase, setCurrentDatabase, listDatabases", {
    expect_equal(currentDatabase(), "default")
    expect_error(setCurrentDatabase("default"), NA)
-   expect_error(setCurrentDatabase("foo"),
-                "Error in setCurrentDatabase : analysis error - Database 'foo' does not exist")
+   expect_error(setCurrentDatabase("zxwtyswklpf"),
+                "Error in setCurrentDatabase : analysis error - Database 'zxwtyswklpf' does not exist")
    dbs <- collect(listDatabases())
    expect_equal(names(dbs), c("name", "description", "locationUri"))
-   expect_equal(dbs[[1]], "default")
+   expect_equal(which(dbs[, 1] == "default"), 1)
--- End diff --
@felixcheung Thanks for reviewing. Actually, to the best of my knowledge, the name "default" for the default database is fixed. It's even hardcoded
[here](https://github.com/apache/spark/blob/c3be2cd347c42972d9c499b6fd9a6f988f80af12/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala#L45-47).
So we should be okay with this check. Please let me know what you think.
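
As a side note, a minimal sketch (assuming a running SparkR session and the testthat helpers already used in this suite) of an ordering-independent variant of the same assertion; it relies only on the fixed name "default" being present, not on its position in the collected listDatabases() result:

    # Hypothetical, ordering-independent variant of the check above:
    # confirm the "default" database is reported by listDatabases(),
    # without assuming it is the first row of the collected data.frame.
    dbs <- collect(listDatabases())
    expect_equal(names(dbs), c("name", "description", "locationUri"))
    expect_true("default" %in% dbs$name)

The version in the diff additionally asserts that "default" is listed first, which is fine as long as that ordering is stable.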
---