HeartSaVioR commented on a change in pull request #29767:
URL: https://github.com/apache/spark/pull/29767#discussion_r490724544
##########
File path: sql/core/src/main/scala/org/apache/spark/sql/streaming/DataStreamWriter.scala
##########
@@ -300,54 +301,53 @@ final class DataStreamWriter[T] private[sql](ds: Dataset[T]) {
         "write files of Hive data source directly.")
     }
-    if (source == "memory") {
-      assertNotPartitioned("memory")
+    if (source == SOURCE_NAME_TABLE) {
+      assertNotPartitioned(SOURCE_NAME_TABLE)
+
+      import df.sparkSession.sessionState.analyzer.CatalogAndIdentifier
+
+      import org.apache.spark.sql.connector.catalog.CatalogV2Implicits._
+      val CatalogAndIdentifier(catalog, identifier) = df.sparkSession.sessionState.sqlParser
+        .parseMultipartIdentifier(tableName)
+
+      // Currently we don't create a logical streaming writer node in logical plan, so cannot rely
+      // on analyzer to resolve it. Directly lookup only for temp view to provide clearer message.
+      // TODO (SPARK-27484): we should add the writing node before the plan is analyzed.
+      if (isTempView(df.sparkSession, identifier.asMultipartIdentifier)) {
Review comment:
Please correct me if I'm missing something here. The reason I pass all parts of the identifier is to cover global temp views, which live in the global temp database. Dropping the db name (when it isn't the global temp database) is handled inside `isTempView`.
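
To illustrate the point, here is a minimal sketch of the check, not Spark's actual implementation: `localTempViews`, `globalTempViews`, and this `isTempView` signature are all hypothetical stand-ins for the session catalog's registries. It shows why all name parts must be passed through: a two-part name can only be a temp view when its first part is the global temp database (`global_temp` by default; Spark reads the real value from `spark.sql.globalTempDatabase`).

```scala
// Hypothetical sketch, not Spark's actual code.
object TempViewLookupSketch {
  // Default global temp database name; configurable in Spark via
  // spark.sql.globalTempDatabase.
  val globalTempDb: String = "global_temp"

  // Illustrative stand-ins for the session's temp view registries.
  val localTempViews: Set[String] = Set("local_view")
  val globalTempViews: Set[String] = Set("global_view")

  // One part: may be a local temp view. Two parts: may be a global temp
  // view, but only if the db part is the global temp database. If the
  // caller stripped the db part up front, "global_temp.global_view"
  // could never be recognized.
  def isTempView(nameParts: Seq[String]): Boolean = nameParts match {
    case Seq(view) => localTempViews.contains(view)
    case Seq(db, view) if db.equalsIgnoreCase(globalTempDb) =>
      globalTempViews.contains(view)
    case _ => false
  }
}

TempViewLookupSketch.isTempView(Seq("local_view"))                  // true
TempViewLookupSketch.isTempView(Seq("global_temp", "global_view"))  // true
TempViewLookupSketch.isTempView(Seq("other_db", "global_view"))     // false: not the global temp db
```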