linhongliu-db commented on a change in pull request #35653:
URL: https://github.com/apache/spark/pull/35653#discussion_r815967083
##########
File path:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/views.scala
##########
@@ -614,7 +614,11 @@ object ViewHelper extends SQLConfHelper with Logging {
}.getOrElse(false)
if (replace && uncache) {
logDebug(s"Try to uncache ${name.quotedString} before replacing.")
- checkCyclicViewReference(analyzedPlan, Seq(name), name)
Review comment:
@stczwd, the community did a lot of work on the view implementation in 3.1
and 3.2. The view's behavior makes more sense now, but it's also a bit more
confusing because we now have several kinds of view implementations:
1. Permanent view created by SQL DDL
2. Temp view created by SQL DDL (since 3.1, this view is represented by its
SQL text)
3. Temp view created by SQL DDL with `storeAnalyzedPlanForView = true`
(the pre-3.1 behavior, same as view 4)
4. Temp view created by the Dataset API (to this day, this view is still
represented by its analyzed logical plan)
To answer your question:
> If we won't support this, then why we change it for backward compatibility?
We don't support recursive views for views created by SQL DDL, but we DO
support them for views created by the Dataset API.
That's why we add this condition here: `!conf.storeAnalyzedPlanForView &&
originalText.nonEmpty`.
It means we only check for cyclic references for views 1 and 2.
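To make the guard concrete, here is a minimal, self-contained Scala sketch (not the actual `ViewHelper` code; the predicate name is hypothetical) modeling which of the four view kinds trigger the cyclic-reference check, based on the `!conf.storeAnalyzedPlanForView && originalText.nonEmpty` condition above:

```scala
object ViewCheckSketch {
  // Hypothetical helper mirroring the condition
  // `!conf.storeAnalyzedPlanForView && originalText.nonEmpty`:
  // only views that carry SQL text and do NOT store the analyzed
  // plan (kinds 1 and 2 above) are checked for cycles.
  def needsCyclicCheck(storeAnalyzedPlanForView: Boolean,
                       originalText: Option[String]): Boolean =
    !storeAnalyzedPlanForView && originalText.nonEmpty

  def main(args: Array[String]): Unit = {
    // Kinds 1 and 2: SQL-text views, analyzed plan not stored -> checked.
    assert(needsCyclicCheck(storeAnalyzedPlanForView = false,
                            originalText = Some("SELECT * FROM t")))
    // Kind 3: storeAnalyzedPlanForView = true -> not checked.
    assert(!needsCyclicCheck(storeAnalyzedPlanForView = true,
                             originalText = Some("SELECT * FROM t")))
    // Kind 4: Dataset API view, no SQL text -> not checked,
    // so recursion remains possible for it.
    assert(!needsCyclicCheck(storeAnalyzedPlanForView = false,
                             originalText = None))
    println("ok")
  }
}
```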
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]