wangyum opened a new pull request, #48789:
URL: https://github.com/apache/spark/pull/48789
### What changes were proposed in this pull request?
This PR adds a new check that preserves the output ordering after AQE re-optimization.
For example:
```sql
SELECT year, course, earnings,
       SUM(earnings) OVER (ORDER BY year, course) AS balance
FROM t
ORDER BY year, course
LIMIT 100
```
The initial plan is:
```
TakeOrderedAndProject(limit=100, orderBy=[year#282 ASC NULLS FIRST,course#281 ASC NULLS FIRST], output=[year#282,course#281,earnings#283,balance#280])
+- Window [sum(earnings#283) windowspecdefinition(year#282 ASC NULLS FIRST, course#281 ASC NULLS FIRST, specifiedwindowframe(RangeFrame, unboundedpreceding$(), currentrow$())) AS balance#280], [year#282 ASC NULLS FIRST, course#281 ASC NULLS FIRST]
   +- Sort [year#282 ASC NULLS FIRST, course#281 ASC NULLS FIRST], false, 0
      +- Exchange SinglePartition, ENSURE_REQUIREMENTS, [plan_id=120]
         +- FileScan parquet spark_catalog.default.t[course#281,year#282,earnings#283]
```
After AQE optimization, the `TakeOrderedAndProject` node is gone, so the final plan no longer enforces the requested ordering:
```
Window [sum(earnings#283) windowspecdefinition(year#282 ASC NULLS FIRST, course#281 ASC NULLS FIRST, specifiedwindowframe(RangeFrame, unboundedpreceding$(), currentrow$())) AS balance#280], [year#282 ASC NULLS FIRST, course#281 ASC NULLS FIRST]
+- *(2) Sort [year#282 ASC NULLS FIRST, course#281 ASC NULLS FIRST], false, 0
   +- ShuffleQueryStage 0
      +- Exchange SinglePartition, ENSURE_REQUIREMENTS, [plan_id=131]
         +- *(1) ColumnarToRow
            +- FileScan parquet spark_catalog.default.t[course#281,year#282,earnings#283]
```
After this PR, the re-optimized plan keeps the required output ordering:
```
*(3) Project [year#282, course#281, earnings#283, balance#280]
+- Window [sum(earnings#283) windowspecdefinition(year#282 ASC NULLS FIRST, course#281 ASC NULLS FIRST, specifiedwindowframe(RangeFrame, unboundedpreceding$(), currentrow$())) AS balance#280], [year#282 ASC NULLS FIRST, course#281 ASC NULLS FIRST]
   +- *(2) Sort [year#282 ASC NULLS FIRST, course#281 ASC NULLS FIRST], false, 0
      +- ShuffleQueryStage 0
         +- Exchange SinglePartition, ENSURE_REQUIREMENTS, [plan_id=131]
            +- *(1) ColumnarToRow
               +- FileScan parquet spark_catalog.default.t[course#281,year#282,earnings#283]
```
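As a minimal sketch of the idea (a hypothetical helper, not the actual patch), the check can be expressed with Catalyst's existing `SortOrder.orderingSatisfies` utility: a re-optimized plan is only acceptable if its output ordering still satisfies the ordering of the plan it replaces.
```scala
import org.apache.spark.sql.catalyst.expressions.SortOrder
import org.apache.spark.sql.execution.SparkPlan

// Hypothetical helper illustrating the check. SortOrder.orderingSatisfies is
// the same utility EnsureRequirements uses to decide whether a Sort is needed:
// it returns true when the first ordering satisfies the second.
def preservesOutputOrdering(origin: SparkPlan, reOptimized: SparkPlan): Boolean =
  SortOrder.orderingSatisfies(reOptimized.outputOrdering, origin.outputOrdering)
```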
### Why are the changes needed?
Fixes a potential data correctness issue and avoids a Spark driver crash like the following:
```
# more hs_err_pid193136.log
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007f9d14841bc0, pid=193136, tid=223205
#
# JRE version: OpenJDK Runtime Environment Zulu17.36+18-SA (17.0.4.1+1) (build 17.0.4.1+1-LTS)
# Java VM: OpenJDK 64-Bit Server VM Zulu17.36+18-SA (17.0.4.1+1-LTS, mixed mode, sharing, tiered, compressed class ptrs, g1 gc, linux-amd64)
# Problematic frame:
# v  ~StubRoutines::jint_disjoint_arraycopy_avx3
#
# Core dump will be written. Default location: /apache/spark-release/3.5.0-20241105/spark/core.193136
...
```
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Unit test.
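For illustration, a sketch of the kind of assertion such a test can make (the table `t`, its column types, and the session setup are assumed here; this is not the actual test added by the PR):
```scala
import org.apache.spark.sql.SparkSession

// Run the example query with AQE enabled and verify the result stays sorted.
val spark = SparkSession.builder()
  .master("local[*]")
  .config("spark.sql.adaptive.enabled", "true")
  .getOrCreate()

val rows = spark.sql(
  """SELECT year, course, earnings,
    |       SUM(earnings) OVER (ORDER BY year, course) AS balance
    |FROM t
    |ORDER BY year, course
    |LIMIT 100""".stripMargin).collect()

// Assumes year is INT and course is STRING.
val keys = rows.map(r => (r.getInt(0), r.getString(1)))
assert(keys.sameElements(keys.sorted), "output must stay ordered by (year, course)")
```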
### Was this patch authored or co-authored using generative AI tooling?
No.