Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/17964#discussion_r116306324
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/QueryPlan.scala ---
@@ -429,17 +429,13 @@ object QueryPlan {
   * with its referenced ordinal from input attributes. It's similar to `BindReferences` but we
   * do not use `BindReferences` here as the plan may take the expression as a parameter with type
   * `Attribute`, and replace it with `BoundReference` will cause error.
+  * Note that, we may have missing attributes, e.g. in the final aggregate of 2-phase aggregation,
+  * we should normalize missing attributes too, with expr id -1.
   */
   def normalizeExprId[T <: Expression](e: T, input: AttributeSeq): T = {
     e.transformUp {
       case s: SubqueryExpression => s.canonicalize(input)
-      case ar: AttributeReference =>
-        val ordinal = input.indexOf(ar.exprId)
-        if (ordinal == -1) {
-          ar
-        } else {
-          ar.withExprId(ExprId(ordinal))
-        }
+      case ar: AttributeReference => ar.withExprId(ExprId(input.indexOf(ar.exprId)))
--- End diff ---
Ok, so this works because of the way we plan aggregates, and I am totally fine with this.

I am slightly worried about non-complete aggregate expressions that cannot be resolved and wreak havoc further down the line because `sameResult` falsely evaluated to `true`. Can we special-case non-complete aggregate expressions?

From an architectural point of view it might be better to add this as a normalize function to `Expression`.
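To illustrate the idea under discussion: `normalizeExprId` rewrites each attribute's globally unique expression ID to that attribute's ordinal in the plan's input, so two semantically identical plans built at different times canonicalize to equal forms (and `sameResult` can compare them). Below is a minimal sketch in plain Scala, with no Spark dependencies; `Attr` and `normalize` are invented stand-ins for `AttributeReference` and `normalizeExprId`, not the actual Catalyst API.

```scala
// Hypothetical stand-in for AttributeReference: just a name and a unique ID.
case class Attr(name: String, exprId: Long)

// Replace each attribute's unique exprId with its ordinal in `input`.
// Attributes missing from the input come out with exprId -1, mirroring
// the behavior discussed in the diff (indexWhere returns -1 when absent).
def normalize(attrs: Seq[Attr], input: Seq[Attr]): Seq[Attr] =
  attrs.map { a =>
    val ordinal = input.indexWhere(_.exprId == a.exprId)
    a.copy(exprId = ordinal.toLong)
  }

// Two plans built at different times assign different exprIds to the same
// logical attributes, but both normalize to the same canonical form:
val input1 = Seq(Attr("a", 101L), Attr("b", 102L))
val input2 = Seq(Attr("a", 201L), Attr("b", 202L))
assert(normalize(input1, input1) == normalize(input2, input2))
```

The reviewer's worry maps onto this sketch: because an absent attribute silently normalizes to -1 rather than failing, two plans that are not actually equivalent could still collapse to the same canonical form.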