cloud-fan opened a new pull request, #54557: URL: https://github.com/apache/spark/pull/54557
### What changes were proposed in this pull request?

Follow-up to #54501. Two cleanups:

1. **Remove dead error code**: The `windowAggregateFunctionWithFilterNotSupportedError` method in `QueryCompilationErrors.scala` and its `_LEGACY_ERROR_TEMP_1030` error class in `error-conditions.json` were left behind after #54501 removed their only call site.
2. **Fix a flaky `first_value`/`last_value` test**: The window filter test used `ORDER BY val_long` with a ROWS frame, but `val_long` has duplicate values in the test data (e.g., three rows with `val_long = 1`), making the `first_value`/`last_value` results non-deterministic. Added `val` and `cate` as tiebreaker columns and used `NULLS LAST` so the output is both stable and meaningful (without `NULLS LAST`, the first matching 'a' row has `val = NULL`, so `first_a` would always be NULL).

### Why are the changes needed?

1. Dead code should be cleaned up.
2. Non-deterministic tests can cause spurious failures.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Re-ran `SQLQueryTestSuite` for `window.sql`; all 4 tests pass across all config dimensions.

### Was this patch authored or co-authored using generative AI tooling?

Yes.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]
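The tiebreaker fix described in the PR can be sketched with an in-memory SQLite table (a stand-in for Spark's window test fixture; the rows below are illustrative and not the actual `window.sql` data, and `NULLS LAST` requires SQLite >= 3.30):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (val INTEGER, val_long INTEGER, cate TEXT)")

# Three rows share val_long = 1, one of them with val = NULL -- the same
# shape of ambiguity the PR describes: ORDER BY val_long alone does not
# pin down which of these rows is "first".
con.executemany(
    "INSERT INTO t VALUES (?, ?, ?)",
    [(None, 1, "a"), (1, 1, "a"), (2, 1, "a"), (3, 2, "b")],
)

# Deterministic ordering: break val_long ties on val (NULLs pushed last,
# so first_value does not trivially return NULL) and then on cate.
q = """
SELECT first_value(val) OVER (
  ORDER BY val_long, val NULLS LAST, cate
  ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
) FROM t
"""
result = [r[0] for r in con.execute(q)]
print(result)  # the tied rows now sort as val = 1, 2, NULL, so first_value is 1
```

With the tiebreakers in place every engine must produce the same row order, so the frame's first row (and hence `first_value`) is fixed regardless of physical storage order.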
