imback82 commented on a change in pull request #31933:
URL: https://github.com/apache/spark/pull/31933#discussion_r599256804



##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/command/views.scala
##########
@@ -62,15 +62,17 @@ case class CreateViewCommand(
     comment: Option[String],
     properties: Map[String, String],
     originalText: Option[String],
-    child: LogicalPlan,
+    analyzedPlan: LogicalPlan,
     allowExisting: Boolean,
     replace: Boolean,
     viewType: ViewType)
   extends RunnableCommand {
 
   import ViewHelper._
 
-  override def innerChildren: Seq[QueryPlan[_]] = Seq(child)
+  override def plansToCheckAnalysis: Seq[LogicalPlan] = Seq(analyzedPlan)

Review comment:
       We need to run checkAnalysis on the analyzed plan; otherwise, for the following:
   ```
   sql("CREATE TABLE view_base_table (key int, data varchar(20)) USING PARQUET")
   sql("CREATE VIEW key_dependent_view AS SELECT * FROM view_base_table GROUP BY key")
   ```
   the view is created successfully, whereas it should have failed with:
   ```
   org.apache.spark.sql.AnalysisException
   expression 'spark_catalog.default.view_base_table.data' is neither present in the group by, nor is it an aggregate function. Add to group by or wrap in first() (or first_value) if you don't care which value you get.
   ```
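
   Below is a minimal, self-contained sketch of the same reproduction as a Scala driver program (the object name, app name, and local master are illustrative assumptions; the table, view, and SQL statements are taken from the snippet above):
   ```scala
   import org.apache.spark.sql.AnalysisException
   import org.apache.spark.sql.SparkSession

   // Hypothetical driver program; only standard SparkSession APIs are used.
   object KeyDependentViewRepro {
     def main(args: Array[String]): Unit = {
       val spark = SparkSession.builder()
         .master("local[1]")
         .appName("key-dependent-view-repro")
         .getOrCreate()

       spark.sql("CREATE TABLE view_base_table (key int, data varchar(20)) USING PARQUET")

       // With checkAnalysis run on the analyzed plan, this statement should throw an
       // AnalysisException because `data` is neither grouped nor aggregated; without
       // that check, the invalid view is created silently.
       try {
         spark.sql(
           "CREATE VIEW key_dependent_view AS SELECT * FROM view_base_table GROUP BY key")
         println("BUG: invalid view was created without an error")
       } catch {
         case e: AnalysisException => println(s"Expected failure: ${e.getMessage}")
       }

       spark.stop()
     }
   }
   ```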



