andygrove commented on issue #1238:
URL: https://github.com/apache/datafusion-comet/issues/1238#issuecomment-2578892598
I can reproduce the issue on `main`, so I am confused about how this is currently passing when we run the Spark SQL tests. Any ideas @kazuyukitanimura or @parthchandra?
Here is my repro:
```scala
test("SPARK-32038: NormalizeFloatingNumbers should work on distinct aggregate") {
  withSQLConf(CometConf.COMET_ENABLED.key -> "false") {
    val nan1 = java.lang.Float.intBitsToFloat(0x7f800001)
    val nan2 = java.lang.Float.intBitsToFloat(0x7fffffff)
    val df = Seq(
      ("mithunr", Float.NaN),
      ("mithunr", nan1),
      ("mithunr", nan2),
      ("abellina", 1.0f),
      ("abellina", 2.0f)).toDF("uid", "score")
    df.write.mode(SaveMode.Overwrite).parquet("test.parquet")
  }
  withSQLConf(CometConf.COMET_SHUFFLE_MODE.key -> "auto") {
    spark.read.parquet("test.parquet").createOrReplaceTempView("view")
    val df =
      spark.sql("select uid, count(distinct score) from view group by 1 order by 1 asc")
    checkSparkAnswer /*AndOperator*/ (df)
  }
}
```
Produces:
```
== Results ==
!== Correct Answer - 2 ==                             == Spark Answer - 2 ==
 struct<uid:string,count(DISTINCT score):bigint>      struct<uid:string,count(DISTINCT score):bigint>
 [abellina,2]                                         [abellina,2]
![mithunr,1]                                          [mithunr,3]
```
The issue is presumably related to normalizing `NaN` and to differences between how Rust and the JVM compare one `NaN` value to another.
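To illustrate the normalization point, here is a minimal, self-contained Scala sketch (not Comet or Spark source; the object and helper names are hypothetical). It shows how the distinct NaN bit patterns used in the repro collapse to a single value once they are normalized to the canonical NaN, which is why Spark reports `count(distinct score) = 1` for `mithunr`, while grouping on the raw bit patterns would yield 3.

```scala
// Hypothetical sketch of NaN normalization for grouping/distinct keys.
// Not the Spark/Comet implementation; names are made up for illustration.
object NaNNormalizationSketch {
  // Canonical quiet-NaN bit pattern (the same bits as java.lang.Float.NaN).
  private val CanonicalNaNBits = 0x7fc00000

  // Map every NaN, whatever its payload bits, to the canonical NaN.
  def normalize(f: Float): Float =
    if (f.isNaN) java.lang.Float.intBitsToFloat(CanonicalNaNBits) else f

  def main(args: Array[String]): Unit = {
    val nan1 = java.lang.Float.intBitsToFloat(0x7f800001)
    val nan2 = java.lang.Float.intBitsToFloat(0x7fffffff)
    val scores = Seq(Float.NaN, nan1, nan2)

    // Raw bit patterns differ, so grouping on the raw bits would see three
    // distinct "score" values for the same logical NaN.
    val rawDistinct = scores.map(f => java.lang.Float.floatToRawIntBits(f)).distinct.size
    println(s"distinct raw bit patterns: $rawDistinct") // 3

    // After normalization they all share the canonical bit pattern, matching
    // Spark's expected count(distinct score) = 1 for the NaN rows.
    val normalizedDistinct =
      scores.map(normalize).map(f => java.lang.Float.floatToRawIntBits(f)).distinct.size
    println(s"distinct normalized bit patterns: $normalizedDistinct") // 1
  }
}
```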