wangyum opened a new pull request #35286:
URL: https://github.com/apache/spark/pull/35286
### What changes were proposed in this pull request?
Use `Aggregate.aggregateExpressions` instead of `Aggregate.output` when
pushing down `LIMIT 1` through an `Aggregate`.
For example:
```scala
spark.range(10).selectExpr("id % 5 AS a", "id % 5 AS b").write.saveAsTable("t1")
spark.sql("SELECT a, b, a AS alias FROM t1 GROUP BY a, b LIMIT 1").explain(true)
```
Before this PR:
```
== Optimized Logical Plan ==
GlobalLimit 1
+- LocalLimit 1
+- !Project [a#227L, b#228L, alias#226L]
+- LocalLimit 1
+- Relation default.t1[a#227L,b#228L] parquet
```
After this PR:
```
== Optimized Logical Plan ==
GlobalLimit 1
+- LocalLimit 1
+- Project [a#227L, b#228L, a#227L AS alias#226L]
+- LocalLimit 1
+- Relation default.t1[a#227L,b#228L] parquet
```
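To see where the alias gets lost, you can inspect the analyzed `Aggregate` node directly. A minimal sketch, assuming the `t1` table created above and an active `spark` session (the `Aggregate` import comes from Catalyst's logical plans):
```scala
import org.apache.spark.sql.catalyst.plans.logical.Aggregate

val analyzed = spark.sql("SELECT a, b, a AS alias FROM t1 GROUP BY a, b")
  .queryExecution.analyzed

analyzed.collect { case agg: Aggregate =>
  // aggregateExpressions keeps the Alias node, e.g. [a#227L, b#228L, a#227L AS alias#226L]
  println(agg.aggregateExpressions)
  // output flattens everything to bare attributes, e.g. [a#227L, b#228L, alias#226L],
  // losing the fact that alias is computed from a
  println(agg.output)
}
```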
### Why are the changes needed?
Fix a bug in the `LIMIT 1` pushdown through `Aggregate`: building the new `Project` from `Aggregate.output` drops the `Alias` that computes `alias` from `a`, because `output` contains only `AttributeReference`s. The resulting `!Project` (note the `!`) references `alias#226L`, which no child operator produces, so the optimized plan is invalid.
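The root cause can be reproduced with Catalyst expressions alone. A minimal sketch, assuming `spark-catalyst` is on the classpath (the names `a`, `b`, and `alias` mirror the example query):
```scala
import org.apache.spark.sql.catalyst.expressions.{Alias, AttributeReference}
import org.apache.spark.sql.types.LongType

val a = AttributeReference("a", LongType)()
val b = AttributeReference("b", LongType)()

// The aggregate list of the example query: [a, b, a AS alias]
val aggregateExpressions = Seq(a, b, Alias(a, "alias")())

// Aggregate.output is derived by calling toAttribute on each expression, which
// replaces the Alias with a bare AttributeReference; the link back to `a` is
// gone, so a Project built from `output` cannot resolve `alias` anymore.
val output = aggregateExpressions.map(_.toAttribute)
```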
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Unit test.