gengliangwang commented on code in PR #40732:
URL: https://github.com/apache/spark/pull/40732#discussion_r1165002418
##########
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveDefaultColumns.scala:
##########
@@ -47,9 +47,10 @@ import org.apache.spark.sql.types._
* (1, 5)
* (4, 6)
*
- * @param catalog the catalog to use for looking up the schema of INSERT INTO table objects.
+ * @param resolveRelations rule to resolve relations from the catalog. This should generally map to
+ *                         the ResolveRelations analyzer rule.
 */
-case class ResolveDefaultColumns(catalog: SessionCatalog) extends Rule[LogicalPlan] {
+case class ResolveDefaultColumns(resolveRelations: Rule[LogicalPlan]) extends Rule[LogicalPlan] {
Review Comment:
Or we can just put the following code (except the pattern match `case i @ InsertIntoStatement...`) as a new method in ResolveRelations:
```
case i @ InsertIntoStatement(table, _, _, _, _, _) if i.query.resolved =>
  val relation = table match {
    case u: UnresolvedRelation if !u.isStreaming =>
      resolveRelation(u).getOrElse(u)
    case other => other
  }
  // Inserting into a file-based temporary view is allowed
  // (e.g., spark.read.parquet("path").createOrReplaceTempView("t")).
  // Thus, we need to look at the raw plan if `relation` is a temporary view.
  unwrapRelationPlan(relation) match {
    case v: View =>
      throw QueryCompilationErrors.insertIntoViewNotAllowedError(v.desc.identifier, table)
    case other => i.copy(table = other)
  }
```
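
For illustration, a minimal sketch of what that extraction could look like; the method name `resolveRelationForInsert` is hypothetical, not something in the PR:
```
// Hypothetical new method in ResolveRelations; the name is an assumption.
// Resolves the target table of an INSERT and rejects plain views.
private def resolveRelationForInsert(table: LogicalPlan): LogicalPlan = {
  val relation = table match {
    case u: UnresolvedRelation if !u.isStreaming =>
      resolveRelation(u).getOrElse(u)
    case other => other
  }
  // Inserting into a file-based temporary view is allowed
  // (e.g., spark.read.parquet("path").createOrReplaceTempView("t")),
  // so look at the raw plan when `relation` is a temporary view.
  unwrapRelationPlan(relation) match {
    case v: View =>
      throw QueryCompilationErrors.insertIntoViewNotAllowedError(v.desc.identifier, table)
    case other => other
  }
}
```
The pattern match that stays at the call site would then shrink to:
```
case i @ InsertIntoStatement(table, _, _, _, _, _) if i.query.resolved =>
  i.copy(table = resolveRelationForInsert(table))
```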