yuchenhuo commented on a change in pull request #26957: [SPARK-30314] Add
identifier and catalog information to DataSourceV2Relation
URL: https://github.com/apache/spark/pull/26957#discussion_r366672769
##########
File path: sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriterV2.scala
##########
@@ -51,9 +51,14 @@ final class DataFrameWriterV2[T] private[sql](table: String, ds: Dataset[T])
   private val tableName = sparkSession.sessionState.sqlParser.parseMultipartIdentifier(table)

-  private val (catalog, identifier) = {
-    val CatalogAndIdentifier(catalog, identifier) = tableName
-    (catalog.asTableCatalog, identifier)
+  private val (catalog, catalogIdentifier, tableIdentifier) = {
+    import df.sparkSession.sessionState.analyzer.{NonSessionCatalogAndIdentifier, SessionCatalogAndIdentifier}
+    tableName match {
+      case NonSessionCatalogAndIdentifier(catalog, identifier) =>
+        (catalog.asTableCatalog, tableName.headOption, identifier)
+      case SessionCatalogAndIdentifier(catalog, identifier) =>
+        (catalog.asTableCatalog, Some(CatalogManager.SESSION_CATALOG_NAME), identifier)
Review comment:
Similar to the reasoning above: I think it's better to always encode the resolved catalog name and table identifier, so that downstream consumers never see an inconsistent or ambiguous pair.
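
The point being made is that a multipart name should always resolve to an explicit catalog name, even when the session catalog is implied. A minimal, Spark-free sketch of that resolution logic is below; `knownCatalogs`, `resolve`, and the catalog names are illustrative assumptions, not Spark's actual API (Spark uses the `NonSessionCatalogAndIdentifier` / `SessionCatalogAndIdentifier` extractors shown in the diff):

```scala
// Hypothetical sketch of multipart-identifier resolution. Names here are
// illustrative stand-ins for Spark's CatalogAndIdentifier extractors.
object IdentifierResolution {
  // Mirrors CatalogManager.SESSION_CATALOG_NAME in Spark.
  val SessionCatalogName = "spark_catalog"

  // Assumed set of registered non-session catalogs, for the sketch only.
  val knownCatalogs: Set[String] = Set("testcat")

  // Resolve a parsed multipart name into (catalogName, identifierParts).
  // The key property, per the review comment: the catalog name is always
  // encoded explicitly, even for the session catalog.
  def resolve(nameParts: Seq[String]): (String, Seq[String]) =
    nameParts match {
      // First part names a registered catalog: the rest is the identifier.
      case head +: rest if knownCatalogs.contains(head) && rest.nonEmpty =>
        (head, rest)
      // Otherwise the session catalog owns the whole name; encode its
      // resolved name rather than leaving the catalog implicit.
      case parts =>
        (SessionCatalogName, parts)
    }
}
```

With this sketch, `resolve(Seq("testcat", "ns", "t"))` yields `("testcat", Seq("ns", "t"))`, while `resolve(Seq("db", "t"))` yields `("spark_catalog", Seq("db", "t"))` — the session catalog is named explicitly instead of being represented as an absent catalog.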
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]