rdblue commented on a change in pull request #25330: [SPARK-28565][SQL]
DataFrameWriter saveAsTable support for V2 catalogs
URL: https://github.com/apache/spark/pull/25330#discussion_r310296063
##########
File path:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
##########
@@ -648,8 +648,13 @@ class Analyzer(
         if catalog.isTemporaryTable(ident) =>
       u // temporary views take precedence over catalog table names
-    case u @ UnresolvedRelation(CatalogObjectIdentifier(Some(catalogPlugin), ident)) =>
-      loadTable(catalogPlugin, ident).map(DataSourceV2Relation.create).getOrElse(u)
+    case u @ UnresolvedRelation(CatalogObjectIdentifier(maybeCatalog, ident)) =>
+      // First try loading the table with a loadable catalog, then fallback to the session
+      // catalog if that exists
+      maybeCatalog.flatMap(loadTable(_, ident))
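The try-then-fall-back pattern in the new case arm can be sketched outside of Spark as a small standalone example. All names below (`Table`, `loadTable`, `sessionLoad`, the catalog and table names) are illustrative stand-ins, not Spark's actual API:

```scala
// Standalone sketch of catalog-lookup-with-session-fallback resolution.
// Names here are illustrative stand-ins, not Spark internals.
case class Table(catalog: String, name: String)

// Pretend lookup in an explicit catalog; None if the table is not found.
def loadTable(catalog: String, ident: String): Option[Table] =
  if (catalog == "prod" && ident == "events") Some(Table("prod", "events")) else None

// Pretend lookup in the session catalog.
def sessionLoad(ident: String): Option[Table] =
  if (ident == "local_tbl") Some(Table("session", "local_tbl")) else None

// Mirrors maybeCatalog.flatMap(loadTable(_, ident)), then falls back
// to the session catalog when no explicit catalog resolves the table.
def resolve(maybeCatalog: Option[String], ident: String): Option[Table] =
  maybeCatalog.flatMap(loadTable(_, ident)).orElse(sessionLoad(ident))
```

Here `resolve(Some("prod"), "events")` resolves through the explicit catalog, while `resolve(None, "local_tbl")` falls back to the session catalog.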
Review comment:
Yes. When a namespace `prod` and a catalog `prod` collide, queries can break in either direction:
* If namespaces take precedence, creating a namespace with the same name as an existing catalog breaks queries that use that catalog
* If catalogs take precedence, creating a catalog with the same name as an existing namespace breaks queries that use that namespace
When we wrote the SPIP, we chose to make catalogs take precedence because:
1. Any user can create a namespace globally in the catalog, possibly breaking other users
2. Users can't globally create catalogs -- that's done by administrators -- so the impact is limited to their own jobs, where they control the Spark configs
3. Global catalogs are created by administrators, and that happens rarely
In short, we expect fewer problems when catalogs take precedence.
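The precedence rule argued for above can be illustrated with a toy resolver that checks the first part of a dotted identifier against the registered catalogs before treating it as a namespace. The structure and names here are illustrative, not Spark's actual `CatalogObjectIdentifier` implementation:

```scala
// Toy resolution of a dotted identifier like "prod.db.events":
// a registered catalog name takes precedence over a namespace of
// the same name. Names here are illustrative, not Spark internals.
val registeredCatalogs = Set("prod", "testcat")

sealed trait Resolved
case class CatalogIdent(catalog: String, rest: Seq[String]) extends Resolved
case class SessionIdent(parts: Seq[String]) extends Resolved

def resolve(parts: Seq[String]): Resolved = parts match {
  case head +: rest if registeredCatalogs.contains(head) && rest.nonEmpty =>
    CatalogIdent(head, rest) // "prod" is a catalog: route the rest to it
  case _ =>
    SessionIdent(parts)      // otherwise: namespace/table in the session catalog
}
```

With this rule, `prod.db.t` routes to catalog `prod` even if a session-catalog namespace named `prod` also exists, which matches the SPIP's choice.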