HeartSaVioR commented on a change in pull request #29756:
URL: https://github.com/apache/spark/pull/29756#discussion_r490198954



##########
File path: 
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
##########
@@ -846,9 +847,9 @@ class Analyzer(
    */
   object ResolveTempViews extends Rule[LogicalPlan] {
     def apply(plan: LogicalPlan): LogicalPlan = plan.resolveOperatorsUp {
-      case u @ UnresolvedRelation(ident, _) =>
+      case u @ UnresolvedRelation(ident, _, _) =>
         lookupTempView(ident).getOrElse(u)

Review comment:
       Just curious: is it the end user's responsibility to know whether the 
temp view is backed by a batch or a streaming plan, so that they can call 
`write` or `writeStream` correctly? Setting `SparkSession.table` aside, I 
assume it's clear, since end users pair the reader and writer sides explicitly 
(`read`/`write` or `readStream`/`writeStream`), but here it looks a bit 
confusing.
   
   If `DataFrameReader.table` allows a streaming temp view, then I guess a 
`read`/`writeStream` pair becomes possible, which is a bit confusing.
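
   To make the concern concrete, here is a hypothetical sketch (the view name, 
source, and sink are illustrative, not taken from the PR) of the pairing 
described above: a streaming temp view consumed through the batch-looking 
`DataFrameReader.table`, which would then have to be written with 
`writeStream`. This assumes `DataFrameReader.table` resolves streaming temp 
views, which is exactly the behavior in question:
   
   ```scala
   // Hypothetical sketch; assumes an active SparkSession named `spark`.
   // Register a streaming DataFrame as a temp view, using the built-in
   // "rate" test source that emits rows over time.
   val streamingDf = spark.readStream
     .format("rate")
     .load()
   streamingDf.createOrReplaceTempView("events") // illustrative view name
   
   // If DataFrameReader.table resolves streaming temp views, this
   // batch-looking read actually yields a streaming DataFrame...
   val df = spark.read.table("events")
   // df.isStreaming would be true in that case.
   
   // ...so the matching writer must be writeStream, not write -- the
   // read/writeStream pairing the comment calls confusing.
   val query = df.writeStream
     .format("console")
     .start()
   ```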




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]
