AngersZhuuuu opened a new pull request #31402:
URL: https://github.com/apache/spark/pull/31402


   
   ### What changes were proposed in this pull request?
   ```
   CREATE TABLE test_unboundedpreceding(SELLER_ID INT) using parquet;
   
   SELECT
     DENSE_RANK() OVER (
       ORDER BY SELLER_ID ROWS BETWEEN 10 PRECEDING
       AND CURRENT ROW
     ) AS SELLER_RANK
   FROM
     test_unboundedpreceding
   ```
   It will throw:
   ```
   Error: Error running query: org.apache.spark.sql.AnalysisException: Window
   Frame specifiedwindowframe(RowFrame, -10, currentrow$()) must match the
   required frame specifiedwindowframe(RowFrame, unboundedpreceding$(),
   currentrow$()); (state=,code=0)
   ```
   Related code:
   
https://github.com/apache/spark/blob/cde697a479a2f67c6bc4281f39a1ab2ff6a9d17d/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/windowExpressions.scala#L514
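   
   For context, rank-like window functions in Spark declare a required frame of `ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW`, and the analyzer rejects any user-specified frame that differs, which is what produces the error above. A minimal self-contained model of that check (all names here are illustrative, not the actual catalyst classes):
   ```
   // Simplified model of the frame check behind the error above.
   // Illustrative names only, not Spark's actual catalyst classes.
   sealed trait FrameBoundary
   case object UnboundedPreceding extends FrameBoundary
   case object CurrentRow extends FrameBoundary
   case class Preceding(rows: Int) extends FrameBoundary

   case class RowFrame(lower: FrameBoundary, upper: FrameBoundary)

   object FrameCheck {
     // Rank-like functions (RANK, DENSE_RANK, ...) declare this required frame.
     val requiredRankFrame: RowFrame = RowFrame(UnboundedPreceding, CurrentRow)

     // Before this PR, any differing user-specified frame fails analysis.
     def validate(specified: RowFrame): Either[String, RowFrame] =
       if (specified == requiredRankFrame) Right(specified)
       else Left(s"Window Frame $specified must match the required frame $requiredRankFrame")
   }

   object Demo extends App {
     // Mirrors the failing query: ROWS BETWEEN 10 PRECEDING AND CURRENT ROW.
     println(FrameCheck.validate(RowFrame(Preceding(10), CurrentRow)))
     // => Left(Window Frame RowFrame(Preceding(10),CurrentRow) must match ...)
   }
   ```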
   
   PostgreSQL accepts the same query:
   ```
   postgres@d40c1bcc8f5a:~$ psql
   psql (11.3 (Debian 11.3-1.pgdg90+1))
   Type "help" for help.
   
   postgres=# create table test_unboundedpreceding(SELLER_ID int);
   CREATE TABLE
   postgres=#     SELECT
   postgres-#       DENSE_RANK() OVER (
   postgres(#         ORDER BY SELLER_ID ROWS BETWEEN 10 PRECEDING
   postgres(#         AND CURRENT ROW
   postgres(#       ) AS SELLER_RANK
   postgres-#     FROM
   postgres-#       test_unboundedpreceding;
    seller_rank
   -------------
   (0 rows)
   ```
   
   ### Why are the changes needed?
   Keep the behavior consistent with PostgreSQL, which accepts an explicit window frame on rank functions instead of failing analysis.
   
   ### Does this PR introduce _any_ user-facing change?
   No
   
   ### How was this patch tested?
   Added unit tests.
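   
   A sketch of the kind of test that could cover this (illustrative only; `withTable`, `sql`, and `checkAnswer` are Spark's test-suite helpers, and the exact test added by this PR may differ):
   ```
   // Illustrative sketch, not necessarily the PR's actual test.
   // Assumes a suite extending QueryTest with SharedSparkSession.
   test("rank function with an explicit ROWS frame should pass analysis") {
     withTable("test_unboundedpreceding") {
       sql("CREATE TABLE test_unboundedpreceding(SELLER_ID INT) USING parquet")
       // The table is empty, so the query should analyze and return no rows,
       // matching the PostgreSQL session above.
       checkAnswer(
         sql(
           """SELECT DENSE_RANK() OVER (
             |  ORDER BY SELLER_ID ROWS BETWEEN 10 PRECEDING AND CURRENT ROW
             |) AS SELLER_RANK
             |FROM test_unboundedpreceding""".stripMargin),
         Seq.empty)
     }
   }
   ```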
   

