Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/19488#discussion_r144584455
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/planning/patterns.scala
---
@@ -205,14 +205,17 @@ object PhysicalAggregation {
    case logical.Aggregate(groupingExpressions, resultExpressions, child) =>
      // A single aggregate expression might appear multiple times in resultExpressions.
      // In order to avoid evaluating an individual aggregate function multiple times, we'll
-     // build a set of the distinct aggregate expressions and build a function which can
+     // build a map of the distinct aggregate expressions and build a function which can
      // be used to re-write expressions so that they reference the single copy of the
-     // aggregate function which actually gets computed.
-     val aggregateExpressions = resultExpressions.flatMap { expr =>
+     // aggregate function which actually gets computed. Note that aggregate expressions
+     // should always be deterministic, so we can use its canonicalized expression as its
--- End diff --
Can you point out where the code is that guarantees this? I took a look at
`CheckAnalysis`, and it seems Spark allows `Aggregate` to have non-deterministic
expressions.
---