HeartSaVioR commented on a change in pull request #29767:
URL: https://github.com/apache/spark/pull/29767#discussion_r490730813



##########
File path: sql/core/src/main/scala/org/apache/spark/sql/streaming/DataStreamWriter.scala
##########
@@ -300,54 +301,53 @@ final class DataStreamWriter[T] private[sql](ds: Dataset[T]) {
         "write files of Hive data source directly.")
     }
 
-    if (source == "memory") {
-      assertNotPartitioned("memory")
+    if (source == SOURCE_NAME_TABLE) {
+      assertNotPartitioned(SOURCE_NAME_TABLE)
+
+      import df.sparkSession.sessionState.analyzer.CatalogAndIdentifier
+
+      import org.apache.spark.sql.connector.catalog.CatalogV2Implicits._
+      val CatalogAndIdentifier(catalog, identifier) = df.sparkSession.sessionState.sqlParser
+          .parseMultipartIdentifier(tableName)
+
+      // Currently we don't create a logical streaming writer node in logical plan, so cannot rely
+      // on analyzer to resolve it. Directly lookup only for temp view to provide clearer message.
+      // TODO (SPARK-27484): we should add the writing node before the plan is analyzed.
+      if (isTempView(df.sparkSession, identifier.asMultipartIdentifier)) {

Review comment:
       Thanks for explaining. I see the failing case where a catalog "exists" for the head of the identifier; let me fix it immediately.
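
       To make the failing case concrete, here is a minimal sketch (everything in it is hypothetical: `testcat`, `tbl`, and the checkpoint path are made-up names, and `InMemoryTableCatalog` is a test-only class standing in for any registered `TableCatalog`). Because `CatalogAndIdentifier` resolves the head of the multipart name against registered catalogs first, the temp-view check only ever sees the remaining parts, so an unrelated temp view can shadow an explicitly catalog-qualified table:

```scala
import org.apache.spark.sql.SparkSession

object TempViewShadowingSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[2]")
      .appName("temp-view-shadowing-sketch")
      // Hypothetical catalog registration; InMemoryTableCatalog only exists
      // in Spark's test sources, so swap in any TableCatalog implementation.
      .config("spark.sql.catalog.testcat",
        "org.apache.spark.sql.connector.catalog.InMemoryTableCatalog")
      .getOrCreate()

    // A temp view whose name collides with the tail of the qualified name.
    spark.range(3).createOrReplaceTempView("tbl")

    // Intended target: table `tbl` inside catalog `testcat`. With the check
    // in the diff above, CatalogAndIdentifier strips the resolved head
    // ("testcat") and isTempView is called with Seq("tbl"), which matches the
    // temp view created above, so the write is rejected with the temp-view
    // error instead of going to testcat.tbl.
    spark.readStream.format("rate").load()
      .writeStream
      .option("checkpointLocation", "/tmp/ckpt") // hypothetical path
      .toTable("testcat.tbl")
      .awaitTermination()
  }
}
```

       The fix should probably consult the temp-view registry with the original, unresolved multipart name (or only when no catalog resolves for the head), so that an explicit catalog qualification wins over a colliding temp view.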






