rdblue commented on a change in pull request #25330: [SPARK-28565][SQL] DataFrameWriter saveAsTable support for V2 catalogs
URL: https://github.com/apache/spark/pull/25330#discussion_r313067388
 
 

 ##########
 File path: sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala
 ##########
 @@ -374,8 +375,12 @@ final class DataFrameWriter[T] private[sql](ds: Dataset[T]) {
     df.sparkSession.sessionState.sqlParser.parseMultipartIdentifier(tableName) match {
       case CatalogObjectIdentifier(Some(catalog), ident) =>
         insertInto(catalog, ident)
+      // TODO(SPARK-28667): Support the V2SessionCatalog
 
 Review comment:
   > Yes, for both v1 and v2 providers. So we still need a rule to check the table provider, and either unwrap it to an `UnresolvedCatalogRelation` or create the actual v2 table.
   
   This should be done in `loadTable`. The original implementation called `TableProvider` to create the table, but that path is currently broken. I think `TableProvider` needs to be improved so that it works, instead of adding more rules to convert from `CatalogTableAsV2`.
   
   The catalog should return the correct `Table` instance for all v2 tables. Spark shouldn't convert between v2 table instances.
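   For illustration, here is a minimal sketch of that direction, written against the Spark 3.x `TableCatalog` API (package names may differ on this PR's branch). `ExampleCatalog`, `ExampleTable`, and the in-memory storage are hypothetical stand-ins, not the actual `V2SessionCatalog` code; the point is only that `loadTable` hands back the concrete v2 `Table`, so no follow-up analyzer rule has to unwrap or convert it:
   
   ```scala
   import java.util
   
   import scala.collection.concurrent.TrieMap
   
   import org.apache.spark.sql.connector.catalog.{Identifier, Table, TableCapability, TableCatalog, TableChange}
   import org.apache.spark.sql.connector.expressions.Transform
   import org.apache.spark.sql.types.StructType
   import org.apache.spark.sql.util.CaseInsensitiveStringMap
   
   // Hypothetical concrete v2 table: the instance a catalog should hand back.
   class ExampleTable(ident: Identifier, tableSchema: StructType) extends Table {
     override def name(): String = ident.toString
     override def schema(): StructType = tableSchema
     override def capabilities(): util.Set[TableCapability] = util.Collections.emptySet()
   }
   
   // Hypothetical catalog: loadTable resolves the identifier and returns the
   // actual Table implementation, rather than a wrapper like CatalogTableAsV2
   // that Spark would have to convert later.
   class ExampleCatalog extends TableCatalog {
     private val tables = new TrieMap[Identifier, Table]()
     private var catalogName: String = _
   
     override def initialize(name: String, options: CaseInsensitiveStringMap): Unit = {
       catalogName = name
     }
   
     override def name(): String = catalogName
   
     override def listTables(namespace: Array[String]): Array[Identifier] =
       tables.keys.filter(_.namespace.sameElements(namespace)).toArray
   
     // The key point: return the correct, concrete v2 Table for the identifier.
     override def loadTable(ident: Identifier): Table =
       tables.getOrElse(ident,
         throw new IllegalArgumentException(s"Table not found: $ident"))
   
     override def createTable(
         ident: Identifier,
         schema: StructType,
         partitions: Array[Transform],
         properties: util.Map[String, String]): Table = {
       val table = new ExampleTable(ident, schema)
       tables.put(ident, table)
       table
     }
   
     override def alterTable(ident: Identifier, changes: TableChange*): Table =
       loadTable(ident) // schema evolution omitted in this sketch
   
     override def dropTable(ident: Identifier): Boolean =
       tables.remove(ident).isDefined
   
     override def renameTable(oldIdent: Identifier, newIdent: Identifier): Unit =
       tables.remove(oldIdent).foreach(table => tables.put(newIdent, table))
   }
   ```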
   
