cloud-fan commented on a change in pull request #35070:
URL: https://github.com/apache/spark/pull/35070#discussion_r776925843
##########
File path: sql/catalyst/src/main/java/org/apache/spark/sql/connector/read/SupportsPushDownAggregates.java
##########
@@ -22,18 +22,19 @@
 /**
  * A mix-in interface for {@link ScanBuilder}. Data sources can implement this interface to
- * push down aggregates. Spark assumes that the data source can't fully complete the
- * grouping work, and will group the data source output again. For queries like
- * "SELECT min(value) AS m FROM t GROUP BY key", after pushing down the aggregate
- * to the data source, the data source can still output data with duplicated keys, which is OK
- * as Spark will do GROUP BY key again. The final query plan can be something like this:
+ * push down aggregates.
+ * <p>
+ * If the data source can't fully complete the grouping work, then
+ * {@link #supportCompletePushDown()} should return false, and Spark will group the data source
+ * output again. For queries like "SELECT min(value) AS m FROM t GROUP BY key", after pushing down
+ * the aggregate to the data source, the data source can still output data with duplicated keys,
+ * which is OK as Spark will do GROUP BY key again. The final query plan can be something like this:
  * <pre>
- * Aggregate [key#1], [min(min(value)#2) AS m#3]
- * +- RelationV2[key#1, min(value)#2]
+ * Aggregate [key#1], [min(min_value#2) AS m#3]
+ * +- RelationV2[key#1, min_value#2]
Review comment:
It's actually decided by the data source, and I pick `min_value` to make
it more readable.
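
As a standalone illustration of the semantics the Javadoc describes (not part of the PR, and with hypothetical class and method names), here is a minimal self-contained Java sketch: each data source partition pushes down a partial `min(value)` per key, so the combined output can contain duplicated keys, and a final Spark-side GROUP BY key over the partial mins still yields the correct answer.

```java
import java.util.*;
import java.util.stream.*;

// Hypothetical sketch of partial aggregate pushdown. Each "partition" of a
// data source computes min(value) per key independently, so the combined
// output can contain duplicated keys. A second group-by-key min (what Spark
// does when supportCompletePushDown() returns false) restores correctness.
public class PartialPushdownSketch {

    // Partial aggregation inside one data source partition: min(value) per key.
    static Map<String, Integer> partialMin(List<Map.Entry<String, Integer>> rows) {
        return rows.stream().collect(
            Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue, Math::min));
    }

    // Final aggregation over all partitions: min(min_value) grouped by key again.
    static Map<String, Integer> finalMin(List<Map<String, Integer>> partials) {
        Map<String, Integer> out = new HashMap<>();
        for (Map<String, Integer> p : partials) {
            p.forEach((k, v) -> out.merge(k, v, Math::min));
        }
        return out;
    }

    public static void main(String[] args) {
        // Two partitions that both contain key "a": their partial outputs
        // have duplicated keys, which is OK.
        List<Map.Entry<String, Integer>> part1 =
            List.of(Map.entry("a", 5), Map.entry("b", 3));
        List<Map.Entry<String, Integer>> part2 =
            List.of(Map.entry("a", 2), Map.entry("b", 7));

        Map<String, Integer> result =
            finalMin(List.of(partialMin(part1), partialMin(part2)));
        // min over all raw rows: a -> 2, b -> 3
        System.out.println(result.get("a") + " " + result.get("b"));
    }
}
```

The same two-phase shape is what the query plan in the diff shows: `RelationV2` emits the pushed-down `min_value`, and the outer `Aggregate` computes `min(min_value)` per key.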
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]