xuanyuanking commented on a change in pull request #29767:
URL: https://github.com/apache/spark/pull/29767#discussion_r490190573



##########
File path: sql/core/src/main/scala/org/apache/spark/sql/streaming/DataStreamWriter.scala
##########
@@ -300,54 +301,44 @@ final class DataStreamWriter[T] private[sql](ds: Dataset[T]) {
         "write files of Hive data source directly.")
     }
 
-    if (source == "memory") {
+    if (source == SOURCE_NAME_TABLE) {
+      assertNotPartitioned("table")
+
+      import df.sparkSession.sessionState.analyzer.CatalogAndIdentifier
+
+      import org.apache.spark.sql.connector.catalog.CatalogV2Implicits._
+      val CatalogAndIdentifier(catalog, identifier) = df.sparkSession.sessionState.sqlParser
+          .parseMultipartIdentifier(tableName)
+      val tableInstance = catalog.asTableCatalog.loadTable(identifier)

Review comment:
       Ah, sorry for being unclear. I meant the behavior when the table does not
exist. Should we support creating a new table and appending data into it? That
seems like a good-to-have feature. IMO it's a significant difference between the
reader and the writer side.
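
For illustration, a minimal sketch of what that fallback could look like inside the `SOURCE_NAME_TABLE` branch. The try/catch placement, the empty partitioning, and the empty table properties here are my assumptions, not something already in this PR:

```scala
// Hypothetical sketch (not part of this PR): if loadTable fails because the
// table does not exist, create it from the streaming query's schema and then
// append the streaming output into it.
import scala.collection.JavaConverters._

import org.apache.spark.sql.catalyst.analysis.NoSuchTableException
import org.apache.spark.sql.connector.catalog.CatalogV2Implicits._
import org.apache.spark.sql.connector.expressions.Transform

val tableCatalog = catalog.asTableCatalog
val tableInstance =
  try {
    tableCatalog.loadTable(identifier)
  } catch {
    case _: NoSuchTableException =>
      // Assumed behavior: create the table with no partitioning and no extra
      // properties, mirroring what the batch writer does for non-existent tables.
      tableCatalog.createTable(
        identifier, df.schema, Array.empty[Transform], Map.empty[String, String].asJava)
  }
```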



