psvri opened a new issue, #15069:
URL: https://github.com/apache/iceberg/issues/15069

   ### Apache Iceberg version
   
   1.10.1 (latest release)
   
   ### Query engine
   
   Spark
   
   ### Please describe the bug 🐞
   
   Hello,
   
   I observed that Iceberg's aggregate pushdown returns a wrong result when a 
column contains NaN. The PySpark snippet below reproduces it.
   
   ```
   from pyspark.sql import SparkSession

   spark = SparkSession.builder \
       .appName("Example") \
       .config("spark.sql.extensions", "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions") \
       .config("spark.sql.catalog.spark_catalog", "org.apache.iceberg.spark.SparkCatalog") \
       .config("spark.sql.catalog.spark_catalog.type", "hadoop") \
       .config("spark.sql.catalog.spark_catalog.warehouse", "/tmp/iceberg_warehouse") \
       .config("spark.jars.packages", "org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.10.1") \
       .getOrCreate()

   spark.sql("CREATE TABLE random_table (id INT, value FLOAT) USING iceberg")
   # Populate the table with data that includes a NaN (values are illustrative)
   spark.sql("INSERT INTO random_table VALUES (1, CAST(0.66714805 AS FLOAT)), (2, CAST('NaN' AS FLOAT))")
   ```
   
   With pushdown
   ```
   spark.conf.set("spark.sql.iceberg.aggregate-push-down.enabled", "true")
   spark.sql("select max(value) from random_table").explain()
   spark.sql("select max(value) from random_table").show()
   
   == Physical Plan ==
   AdaptiveSparkPlan isFinalPlan=false
   +- HashAggregate(keys=[], functions=[max(agg_func_0#504)])
      +- Exchange SinglePartition, ENSURE_REQUIREMENTS, [plan_id=557]
         +- HashAggregate(keys=[], functions=[partial_max(agg_func_0#504)])
            +- Project [max(value)#505 AS agg_func_0#504]
               +- LocalTableScan [max(value)#505]
   
   
   +----------+
   |max(value)|
   +----------+
   |0.66714805|
   +----------+
   ```
   
   Without pushdown
   ```
   spark.conf.set("spark.sql.iceberg.aggregate-push-down.enabled", "false")
   spark.sql("select max(value) from random_table").explain()
   spark.sql("select max(value) from random_table").show()
   
   == Physical Plan ==
   AdaptiveSparkPlan isFinalPlan=false
   +- HashAggregate(keys=[], functions=[max(value#531)])
      +- Exchange SinglePartition, ENSURE_REQUIREMENTS, [plan_id=601]
         +- HashAggregate(keys=[], functions=[partial_max(value#531)])
            +- BatchScan spark_catalog.default.random_table[value#531] 
spark_catalog.default.random_table (branch=null) [filters=, groupedBy=] 
RuntimeFilters: []
   
   
   +----------+
   |max(value)|
   +----------+
   |       NaN|
   +----------+
   ```
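
   My understanding (an assumption, not verified against the Iceberg source) is that the pushed-down MAX is answered from file-level upper-bound column statistics, which exclude NaN (Iceberg's spec tracks NaN occurrences separately via `nan_value_counts`), while Spark SQL's MAX treats NaN as greater than any other float. A minimal plain-Python sketch of that semantic mismatch:

   ```python
   import math

   values = [0.66714805, float("nan")]

   # File-level upper-bound stats typically exclude NaN, so a
   # stats-only answer returns the largest non-NaN value.
   stats_upper_bound = max(v for v in values if not math.isnan(v))

   # Spark SQL orders NaN above every other float, so MAX over a
   # column containing NaN should return NaN.
   spark_max = float("nan") if any(math.isnan(v) for v in values) else max(values)

   print(stats_upper_bound)  # 0.66714805 — the pushed-down (wrong) answer
   print(math.isnan(spark_max))  # True — the full-scan (correct) answer is NaN
   ```

   If that is the cause, the pushdown would need to consult `nan_value_counts` (or fall back to a scan) before answering MIN/MAX from bounds on floating-point columns.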
   
   ### Willingness to contribute
   
   - [x] I can contribute a fix for this bug independently
   - [ ] I would be willing to contribute a fix for this bug with guidance from 
the Iceberg community
   - [ ] I cannot contribute a fix for this bug at this time


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
