cloud-fan commented on a change in pull request #29767:
URL: https://github.com/apache/spark/pull/29767#discussion_r490696111



##########
File path: sql/core/src/main/scala/org/apache/spark/sql/streaming/DataStreamWriter.scala
##########
@@ -300,54 +301,44 @@ final class DataStreamWriter[T] private[sql](ds: Dataset[T]) {
         "write files of Hive data source directly.")
     }
 
-    if (source == "memory") {
+    if (source == SOURCE_NAME_TABLE) {
+      assertNotPartitioned("table")
+
+      import df.sparkSession.sessionState.analyzer.CatalogAndIdentifier
+
+      import org.apache.spark.sql.connector.catalog.CatalogV2Implicits._
+      val CatalogAndIdentifier(catalog, identifier) = df.sparkSession.sessionState.sqlParser

Review comment:
       > (only if the temp view is a single data source scan node)
   
   As I mentioned before, the temp view must be very simple, like `spark.table(name)` or `CREATE TEMP VIEW v USING parquet OPTIONS(...)`.
   
   I believe there are tests, but I don't remember where they are. You can update `ResolveRelations` to drop support for inserting into temp views and see which tests fail.
   
   For this particular PR, I'm OK with not supporting temp views for now, as we need to refactor things a bit and add a logical plan for streaming writes. But for consistency with other places that look up a table, we should still look up temp views and simply fail if a temp view is returned.
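
   To make the "look up, then fail" suggestion concrete, here is a rough sketch of what it could look like inside `DataStreamWriter` (the `isTempView` check, the helper name `resolveWriteTarget`, and the error message are illustrative assumptions, not the exact code to use):

   ```scala
   // Rough sketch only, assuming it sits inside DataStreamWriter where `df` is in
   // scope; the `isTempView` lookup and the error message are illustrative.
   import org.apache.spark.sql.AnalysisException
   import org.apache.spark.sql.connector.catalog.CatalogV2Implicits._

   private def resolveWriteTarget(tableName: String): Unit = {
     import df.sparkSession.sessionState.analyzer.CatalogAndIdentifier

     // Parse the name exactly like other table lookups do.
     val nameParts =
       df.sparkSession.sessionState.sqlParser.parseMultipartIdentifier(tableName)

     // Temp views shadow catalog tables, so check them first and fail explicitly:
     // streaming writes to temp views are not supported yet.
     if (df.sparkSession.sessionState.catalog.isTempView(nameParts)) {
       throw new AnalysisException(
         s"Writing a streaming query to temporary view `$tableName` is not supported.")
     }

     // Otherwise resolve to a (catalog, identifier) pair for the table write path.
     val CatalogAndIdentifier(catalog, identifier) = nameParts
     val tableCatalog = catalog.asTableCatalog
     // ... build the streaming write against `tableCatalog` and `identifier`.
   }
   ```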



