HeartSaVioR commented on a change in pull request #29767:
URL: https://github.com/apache/spark/pull/29767#discussion_r490715412



##########
File path: sql/core/src/main/scala/org/apache/spark/sql/streaming/DataStreamWriter.scala
##########
@@ -300,54 +301,44 @@ final class DataStreamWriter[T] private[sql](ds: Dataset[T]) {
         "write files of Hive data source directly.")
     }
 
-    if (source == "memory") {
+    if (source == SOURCE_NAME_TABLE) {
+      assertNotPartitioned("table")
+
+      import df.sparkSession.sessionState.analyzer.CatalogAndIdentifier
+
+      import org.apache.spark.sql.connector.catalog.CatalogV2Implicits._
+    val CatalogAndIdentifier(catalog, identifier) = df.sparkSession.sessionState.sqlParser

Review comment:
      Done in e7cd27d - now it looks up the (global) temp view directly and provides a clearer error message. Also added the relevant tests.
   
    That said, I can't find the fallback logic in DataFrameWriterV2. It simply looks up the identifier through the catalog, so a temp view will not be found. Do I understand correctly, and if so, is this the desired/expected behavior?
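
    To make the lookup-order question concrete, here is a minimal standalone Scala sketch, not Spark code: `TempViews`, `Catalog`, `streamingLookup`, and `catalogOnlyLookup` are all hypothetical names used only to model the two behaviors being compared.

```scala
// Hypothetical sketch (not Spark API): models the two lookup orders discussed
// above. All types and method names here are made up for illustration.
object LookupOrderSketch {
  type Identifier = Seq[String]
  final case class Table(name: String)

  trait TempViews { def get(ident: Identifier): Option[Table] }
  trait Catalog { def load(ident: Identifier): Option[Table] }

  // DataStreamWriter.toTable-style order (as changed in e7cd27d, per the
  // comment above): consult (global) temp views first so a name that matches
  // a view fails immediately with a clear message.
  def streamingLookup(views: TempViews, catalog: Catalog, ident: Identifier): Table =
    views.get(ident) match {
      case Some(_) =>
        throw new IllegalArgumentException(
          s"${ident.mkString(".")} is a temp view, not a table; cannot stream to it.")
      case None =>
        catalog.load(ident).getOrElse(
          throw new NoSuchElementException(s"Table ${ident.mkString(".")} not found"))
    }

  // DataFrameWriterV2-style order as the question reads it: catalog only,
  // so an identifier that names a temp view is reported as "not found".
  def catalogOnlyLookup(catalog: Catalog, ident: Identifier): Table =
    catalog.load(ident).getOrElse(
      throw new NoSuchElementException(s"Table ${ident.mkString(".")} not found"))
}
```

    The point of checking the temp view registry first is that a matching view produces an explicit, targeted error up front, rather than surfacing later as a generic "table not found" from the catalog.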



