[ https://issues.apache.org/jira/browse/SPARK-41793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17653238#comment-17653238 ]

Gera Shegalov commented on SPARK-41793:
---------------------------------------

Similarly, the equivalent query in SQLite
{code}
.header on

create table test_table(a long, b decimal(38,2));
insert into test_table 
values
  ('9223372036854775807', '11342371013783243717493546650944543.47'),
  ('9223372036854775807', '999999999999999999999999999999999999.99');

select * from test_table;

select 
  count(1) over(
    partition by a 
    order by b asc
    range between 10.2345 preceding and 6.7890 following) as cnt_1 
  from 
    test_table;
{code}

yields the expected two non-empty windows (SQLite's NUMERIC affinity stores column b as an 8-byte REAL, hence the scientific notation):

{code}
a|b
9223372036854775807|1.13423710137832e+34
9223372036854775807|1.0e+36
cnt_1
1
1
{code}
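
For a Spark-independent sanity check, below is a minimal plain-Python sketch that evaluates the same RANGE frame by hand (using only the standard decimal module; the script is mine, not taken from the issue). Each row's frame contains only the row itself:

{code}
from decimal import Decimal, getcontext

getcontext().prec = 50  # keep the 38-digit bound arithmetic exact

# the two rows of test_table: (a, b)
rows = [
    (9223372036854775807, Decimal('11342371013783243717493546650944543.47')),
    (9223372036854775807, Decimal('999999999999999999999999999999999999.99')),
]

PRECEDING = Decimal('10.2345')
FOLLOWING = Decimal('6.7890')

# RANGE BETWEEN 10.2345 PRECEDING AND 6.7890 FOLLOWING: for each row,
# count the partition rows whose b lies in [b - 10.2345, b + 6.7890]
for a, b in rows:
    lo, hi = b - PRECEDING, b + FOLLOWING
    cnt = sum(1 for a2, b2 in rows if a2 == a and lo <= b2 <= hi)
    print(cnt)  # the two b values are ~1e36 apart, so this prints 1 twice
{code}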

> Incorrect result for window frames defined by a range clause on large 
> decimals 
> -------------------------------------------------------------------------------
>
>                 Key: SPARK-41793
>                 URL: https://issues.apache.org/jira/browse/SPARK-41793
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 3.4.0
>            Reporter: Gera Shegalov
>            Priority: Major
>
> Context: https://github.com/NVIDIA/spark-rapids/issues/7429#issuecomment-1368040686
> The following windowing query on a simple two-row input should produce two 
> non-empty windows:
> {code}
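> # assumes a pyspark shell (or similar) where spark is predefined and sql = spark.sql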
> from pprint import pprint
> data = [
>   ('9223372036854775807', '11342371013783243717493546650944543.47'),
>   ('9223372036854775807', '999999999999999999999999999999999999.99')
> ]
> df1 = spark.createDataFrame(data, 'a STRING, b STRING')
> df2 = df1.select(df1.a.cast('LONG'), df1.b.cast('DECIMAL(38,2)'))
> df2.createOrReplaceTempView('test_table')
> df = sql('''
>   SELECT 
>     COUNT(1) OVER (
>       PARTITION BY a 
>       ORDER BY b ASC 
>       RANGE BETWEEN 10.2345 PRECEDING AND 6.7890 FOLLOWING
>     ) AS CNT_1 
>   FROM 
>     test_table
>   ''')
> res = df.collect()
> df.explain(True)
> pprint(res)
> {code}
> Spark 3.4.0-SNAPSHOT output:
> {code}
> [Row(CNT_1=1), Row(CNT_1=0)]
> {code}
> Spark 3.3.1 output, as expected:
> {code}
> [Row(CNT_1=1), Row(CNT_1=1)]
> {code}
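>
> One arithmetic observation that may be relevant (my own note, not confirmed by the linked context): for the second row the upper frame bound b + 6.7890 no longer fits in DECIMAL(38,2). A quick check with Python's arbitrary-precision decimal module:
> {code}
> from decimal import Decimal, getcontext
>
> getcontext().prec = 50  # avoid rounding during the check
> b = Decimal('999999999999999999999999999999999999.99')
> print(b + Decimal('6.7890'))
> # prints 1000000000000000000000000000000000006.7790: 37 integer digits,
> # so the bound overflows DECIMAL(38,2); a null/overflowed bound would
> # make the frame empty, matching the incorrect CNT_1=0 above
> {code}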


