Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/9406#discussion_r44207772
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/Utils.scala
---
@@ -213,3 +216,178 @@ object Utils {
case other => None
}
}
+
+/**
+ * This rule rewrites an aggregate query with multiple distinct clauses into an
+ * expanded double aggregation in which the regular aggregation expressions and
+ * every distinct clause is aggregated in a separate group. The results are then
+ * combined in a second aggregate.
+ *
+ * TODO Expression canonicalization
+ * TODO Eliminate foldable expressions from distinct clauses.
+ * TODO This eliminates all distinct expressions. We could safely pass one to the
+ *      aggregate operator. Perhaps this is a good thing? It is much simpler to
+ *      plan later on...
--- End diff --
Yeah, we can use this path to handle all cases. If I understand correctly,
this rewriting approach first creates two logical Aggregate operators, so we
shuffle the data twice. Our current planning rule for a single distinct
aggregate shuffles the data once, which can be bad when there is no group-by
clause (because we end up with a single reducer). To make the ideal decision,
we would need statistics on the grouping columns and the distinct column.
However, for the case where we have a single distinct column and no group-by
clause, I feel your rewriting approach should be strictly better. What do you
think?
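To make the two-phase rewrite concrete, here is a minimal sketch of the idea
using plain Scala collections rather than Spark's logical plans. It simulates
rewriting `COUNT(DISTINCT a), COUNT(DISTINCT b) GROUP BY key` into an Expand
step plus two aggregations; all names here (`Row`, `countDistincts`, the `gid`
tagging) are illustrative assumptions, not Spark APIs.

```scala
// Illustrative sketch only: simulates the multi-distinct rewrite with
// collections. In Spark, each groupBy below would correspond to a shuffle.
object DistinctRewriteSketch {
  // Hypothetical input row: a grouping key plus two distinct-agg columns.
  case class Row(key: String, a: Int, b: Int)

  // Computes (COUNT(DISTINCT a), COUNT(DISTINCT b)) per key.
  def countDistincts(rows: Seq[Row]): Map[String, (Int, Int)] = {
    // Expand: emit one copy of each row per distinct clause, tagged with a
    // group id. gid = 1 carries column a, gid = 2 carries column b.
    val expanded = rows.flatMap { r =>
      Seq((r.key, 1, r.a), (r.key, 2, r.b))
    }
    // First aggregate: group by (key, gid, value) to deduplicate the
    // distinct arguments (the first shuffle in the rewrite).
    val dedup = expanded.distinct
    // Second aggregate: group by key alone and count surviving rows per
    // gid (the second shuffle).
    dedup.groupBy(_._1).map { case (key, group) =>
      val distinctA = group.count(_._2 == 1)
      val distinctB = group.count(_._2 == 2)
      key -> (distinctA, distinctB)
    }
  }
}
```

With no group-by clause, the second aggregation collapses to a single group
(one reducer), which is the trade-off discussed above.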