Github user hvanhovell commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16168#discussion_r91051441

    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/command/views.scala ---
    @@ -207,31 +205,56 @@ case class CreateViewCommand(
       }
     
       /**
    -   * Returns a [[CatalogTable]] that can be used to save in the catalog. This comment canonicalize
    -   * SQL based on the analyzed plan, and also creates the proper schema for the view.
    +   * Returns a [[CatalogTable]] that can be used to save in the catalog. This stores the following
    +   * properties for a view:
    +   * 1. The `viewText` which is used to generate a logical plan when we resolve a view;
    +   * 2. The `currentDatabase` which sets the current database on Analyze stage;
    +   * 3. The `schema` which ensure we generate the correct output.
        */
       private def prepareTable(sparkSession: SparkSession, aliasedPlan: LogicalPlan): CatalogTable = {
    -    val viewSQL: String = new SQLBuilder(aliasedPlan).toSQL
    +    val currentDatabase = sparkSession.sessionState.catalog.getCurrentDatabase
     
    -    // Validate the view SQL - make sure we can parse it and analyze it.
    -    // If we cannot analyze the generated query, there is probably a bug in SQL generation.
    -    try {
    -      sparkSession.sql(viewSQL).queryExecution.assertAnalyzed()
    -    } catch {
    -      case NonFatal(e) =>
    -        throw new RuntimeException(s"Failed to analyze the canonicalized SQL: $viewSQL", e)
    -    }
    +    if (originalText.isDefined) {
    --- End diff --

    When does this happen?
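For context on the branch the reviewer is asking about: `originalText` carries the raw SQL text of `CREATE VIEW ... AS <query>`, so it is defined for SQL-created views but may be absent for views created programmatically, where no SQL text exists. The following is a minimal, hypothetical sketch (not the actual Spark implementation; `ViewMeta` and `prepareViewMeta` are invented names) that models this `Option`-based branching:

```scala
// Hypothetical, simplified model of the view metadata that CreateViewCommand
// stores in the catalog. In real Spark this would be a CatalogTable.
final case class ViewMeta(viewText: Option[String], currentDatabase: String)

// Sketch of the branch under review: when the original SQL text is available,
// store it so the view can be re-parsed and re-resolved later; when it is not
// (e.g. a view defined through a programmatic API), there is no text to store.
def prepareViewMeta(originalText: Option[String], currentDatabase: String): ViewMeta =
  if (originalText.isDefined) {
    // SQL-defined view: keep the original text for later resolution.
    ViewMeta(viewText = originalText, currentDatabase = currentDatabase)
  } else {
    // No SQL text available for this view.
    ViewMeta(viewText = None, currentDatabase = currentDatabase)
  }
```

This only illustrates why the `isDefined` check exists; the actual handling of the `None` case is what the review comment is probing.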