cloud-fan commented on code in PR #38851:
URL: https://github.com/apache/spark/pull/38851#discussion_r1037758192


##########
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala:
##########
@@ -2109,6 +2110,51 @@ class Analyzer(override val catalogManager: CatalogManager)
     }
   }
 
+  /**
+   * Resolves `UnresolvedAttribute` to `OuterReference` if we are resolving subquery plans
+   * (when `AnalysisContext.get.outerPlan` is set).
+   */
+  object ResolveOuterReferences extends Rule[LogicalPlan] {
+    override def apply(plan: LogicalPlan): LogicalPlan = {
+      // Only apply this rule if we are resolving subquery plans.
+      if (AnalysisContext.get.outerPlan.isEmpty) return plan
+
+      // We must run these 3 rules first, as they also resolve `UnresolvedAttribute` and have
+      // higher priority than outer reference resolution.
+      val prepared = ResolveAggregateFunctions(ResolveMissingReferences(ResolveReferences(plan)))
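
For context on the doc comment above, a minimal, standalone sketch (not part of this PR) of the kind of query this rule handles. The table names `t1`/`t2`, the view setup, and the query are illustrative assumptions, not taken from the PR.

```scala
import org.apache.spark.sql.SparkSession

// Illustrative only: a correlated scalar subquery where `t1.a` cannot be resolved
// from the subquery's own input (`t2`) and must be resolved against the outer plan.
object OuterReferenceExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[1]")
      .appName("outer-reference-example")
      .getOrCreate()

    spark.range(5).toDF("a").createOrReplaceTempView("t1")
    spark.range(5).toDF("b").createOrReplaceTempView("t2")

    // While analyzing the subquery plan, `t1.a` is not produced by `t2`, so it is
    // resolved against the outer plan and treated as an outer reference.
    spark.sql("SELECT a, (SELECT max(b) FROM t2 WHERE t2.b = t1.a) AS m FROM t1").show()

    spark.stop()
  }
}
```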

Review Comment:
   Yes, this is not a perfect solution, but AFAIK this is the only reliable way 
to guarantee rule execution order. The best solution in my opinion is to 
centralize all column resolution code in one rule, but that's a much larger 
change.
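
A minimal, self-contained sketch of the ordering pattern being discussed (plain Scala, not Spark's actual `Rule`/`LogicalPlan` classes; all names below are made-up stand-ins): the outer-reference rule invokes the higher-priority rules directly, so their execution order is guaranteed by the call nesting rather than by where the rule sits in the analyzer batches.

```scala
// Toy model of "guarantee rule order by invoking rules directly".
object RuleOrderingSketch {

  // A stand-in "plan" that just records which rules have run, in order.
  final case class Plan(steps: List[String])

  // A rule is a plan-to-plan transformation, like Rule[LogicalPlan].
  trait Rule { def apply(plan: Plan): Plan }

  // Stand-ins for the three higher-priority resolution rules.
  object ResolveReferencesSketch extends Rule {
    def apply(plan: Plan): Plan = Plan(plan.steps :+ "references")
  }
  object ResolveMissingReferencesSketch extends Rule {
    def apply(plan: Plan): Plan = Plan(plan.steps :+ "missingReferences")
  }
  object ResolveAggregateFunctionsSketch extends Rule {
    def apply(plan: Plan): Plan = Plan(plan.steps :+ "aggregateFunctions")
  }

  // The outer-reference rule runs the prerequisite rules explicitly before doing
  // its own work, mirroring the nested call in the diff above.
  object ResolveOuterReferencesSketch extends Rule {
    def apply(plan: Plan): Plan = {
      val prepared = ResolveAggregateFunctionsSketch(
        ResolveMissingReferencesSketch(ResolveReferencesSketch(plan)))
      Plan(prepared.steps :+ "outerReferences")
    }
  }

  def main(args: Array[String]): Unit = {
    // Prints: List(references, missingReferences, aggregateFunctions, outerReferences)
    println(ResolveOuterReferencesSketch(Plan(Nil)).steps)
  }
}
```

The trade-off is the one the comment acknowledges: calling prerequisite rules explicitly may repeat work the analyzer would do anyway, but it makes the ordering explicit until column resolution is centralized in a single rule.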



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
