imback82 commented on a change in pull request #26684: [SPARK-30001][SQL]
ResolveRelations should handle both V1 and V2 tables.
URL: https://github.com/apache/spark/pull/26684#discussion_r354623705
##########
File path:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
##########
@@ -777,36 +772,84 @@ class Analyzer(
}
     def apply(plan: LogicalPlan): LogicalPlan = ResolveTables(plan).resolveOperatorsUp {
-      case i @ InsertIntoStatement(u @ UnresolvedRelation(AsTableIdentifier(ident)), _, child, _, _)
-          if child.resolved =>
-        EliminateSubqueryAliases(lookupTableFromCatalog(ident, u)) match {
+      case i @ InsertIntoStatement(
+          u @ UnresolvedRelation(CatalogObjectIdentifier(catalog, ident)), _, _, _, _)
+          if i.query.resolved && CatalogV2Util.isSessionCatalog(catalog) =>
+        val relation = ResolveTempViews(u) match {
Review comment:
https://github.com/apache/spark/blob/master/sql/core/src/test/scala/org/apache/spark/sql/sources/InsertSuite.scala#L87
is one example where temp view resolution is required. Maybe the confusion is
that `InsertIntoStatement` is used for `INSERT OVERWRITE`, `INSERT INTO`, etc.
(see the sketch below)?
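
For context, a minimal sketch of that point, assuming a live `SparkSession` named `spark` (`t` and `s` are placeholder table names, not from this PR): both statement forms parse to the same `InsertIntoStatement` node, differing only in the overwrite flag, so they go through the same relation resolution.

```scala
import org.apache.spark.sql.catalyst.plans.logical.InsertIntoStatement

// Both INSERT INTO and INSERT OVERWRITE parse to InsertIntoStatement;
// only the overwrite flag differs, so target-relation resolution is shared.
val parser = spark.sessionState.sqlParser
val into      = parser.parsePlan("INSERT INTO t SELECT * FROM s")
val overwrite = parser.parsePlan("INSERT OVERWRITE TABLE t SELECT * FROM s")
assert(into.isInstanceOf[InsertIntoStatement])
assert(overwrite.isInstanceOf[InsertIntoStatement])
assert(overwrite.asInstanceOf[InsertIntoStatement].overwrite)
```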
> It seems to me that it should insert into the table default.t1 because it
doesn't make sense to insert into the temp view.
I think it should still resolve to the temp view (for consistent lookup
behavior) but then fail during the analysis check, which is the current behavior.
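
To make that concrete, here is a rough sketch of the scenario (not the linked test verbatim; names and the exact error message are illustrative, and a `SparkSession` named `spark` is assumed): with a temp view shadowing `default.t1`, the INSERT resolves to the temp view and the analysis check rejects it, rather than silently writing to the table.

```scala
import org.apache.spark.sql.AnalysisException

// A table and a temp view with the same name; the temp view shadows the table.
spark.sql("CREATE TABLE default.t1 (i INT) USING parquet")
spark.sql("CREATE TEMPORARY VIEW t1 AS SELECT 1 AS i")

// Expected: lookup hits the temp view first (consistent lookup order), and the
// analysis check then fails because the view is not insertable, instead of the
// statement resolving to and writing into default.t1.
try {
  spark.sql("INSERT INTO t1 VALUES (2)")
} catch {
  case e: AnalysisException => println(e.getMessage)
}
```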