dcoliversun opened a new issue, #9542:
URL: https://github.com/apache/incubator-gluten/issues/9542

   ### Backend
   
   VL (Velox)
   
   ### Bug description
   
   Two problems:  
   * Velox supports `round`, but Gluten fails to offload the expression to the native backend
   * The fallback reason reported in the log is empty, so there is no hint about why native validation failed
   
   Test case:  
   ```scala
   test("round") {
       withTempPath {
         path =>
           Seq[Integer](
             25,
             36
           )
             .toDF("value")
             .write
             .parquet(path.getCanonicalPath)
   
           
            spark.read.parquet(path.getCanonicalPath).createOrReplaceTempView("view")
   
           runQueryAndCompare("SELECT round(value, -1) from view") {
             checkGlutenOperatorMatch[ProjectExecTransformer]
           }
   
           sql("SELECT round(value, -1) from view").show()
       }
     }
   ```
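   For reference, Spark's `round` uses HALF_UP semantics, so the query above is expected to return 30 and 40 for the two input rows. A minimal sketch of that expectation (using Python's `decimal` module purely to illustrate the rounding mode, not Spark or Velox themselves):

   ```python
   # Illustrates the rounding semantics Spark's round(value, -1) applies to the
   # test data above. Spark rounds HALF_UP, unlike Python's built-in round(),
   # which rounds half to even (round(25, -1) == 20).
   from decimal import Decimal, ROUND_HALF_UP

   def spark_round(value: int, scale: int) -> int:
       """HALF_UP rounding at the given scale, mirroring Spark's round()."""
       exp = Decimal(1).scaleb(-scale)  # scale = -1 -> quantize to tens
       return int(Decimal(value).quantize(exp, rounding=ROUND_HALF_UP))

   print([spark_round(v, -1) for v in (25, 36)])  # -> [30, 40]
   ```

   Whichever engine evaluates the projection, row-based Spark or native Velox, the results should agree with this; the bug here is only that the projection falls back instead of running natively.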
   
   The log is as follows:  
   ```plain
   16:05:46.980 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
   
   W20250507 16:05:56.172621 1131222 MemoryArbitrator.cpp:84] Query memory capacity[460.50MB] is set for NOOP arbitrator which has no capacity enforcement
   E20250507 16:06:37.676007 1131695 Exceptions.h:66] Line: /opt/gluten/ep/build-velox/build/velox_ep/velox/exec/Task.cpp:2068, Function:terminate, Expression:  Cancelled, Source: RUNTIME, ErrorCode: INVALID_STATE
   
   E20250507 16:06:46.796408 1131695 Exceptions.h:66] Line: /opt/gluten/ep/build-velox/build/velox_ep/velox/exec/Task.cpp:2068, Function:terminate, Expression:  Cancelled, Source: RUNTIME, ErrorCode: INVALID_STATE
   16:06:53.176 WARN org.apache.spark.sql.execution.GlutenFallbackReporter: Validation failed for plan: Project, due to: 
    - Native validation failed: 
      |- 
   
   16:06:54.658 WARN org.apache.spark.sql.execution.GlutenFallbackReporter: Validation failed for plan: Project[QueryId=13], due to: 
    - Native validation failed: 
      |- 
   
   
   executedPlan.exists(((plan: org.apache.spark.sql.execution.SparkPlan) => tag.runtimeClass.isInstance(plan))) was false Expect ProjectExecTransformer exists in executedPlan:
    Project [round(value#144, -1) AS round(value, -1)#151]
   +- VeloxColumnarToRow
      +- ^(3) BatchScanTransformer parquet file:/tmp/spark-c676d58b-b13a-43f2-82e6-cbdb3abcf21b[value#144] ParquetScan DataFilters: [], Format: parquet, Location: InMemoryFileIndex(1 paths)[file:/tmp/spark-c676d58b-b13a-43f2-82e6-cbdb3abcf21b], PartitionFilters: [], PushedAggregation: [], PushedFilters: [], PushedGroupBy: [], ReadSchema: struct<value:int> RuntimeFilters: [] NativeFilters: []
   
   ScalaTestFailureLocation: org.apache.spark.sql.GlutenQueryTest at (GlutenQueryTest.scala:421)
   org.scalatest.exceptions.TestFailedException: executedPlan.exists(((plan: org.apache.spark.sql.execution.SparkPlan) => tag.runtimeClass.isInstance(plan))) was false Expect ProjectExecTransformer exists in executedPlan:
    Project [round(value#144, -1) AS round(value, -1)#151]
   +- VeloxColumnarToRow
      +- ^(3) BatchScanTransformer parquet file:/tmp/spark-c676d58b-b13a-43f2-82e6-cbdb3abcf21b[value#144] ParquetScan DataFilters: [], Format: parquet, Location: InMemoryFileIndex(1 paths)[file:/tmp/spark-c676d58b-b13a-43f2-82e6-cbdb3abcf21b], PartitionFilters: [], PushedAggregation: [], PushedFilters: [], PushedGroupBy: [], ReadSchema: struct<value:int> RuntimeFilters: [] NativeFilters: []
   
   ```
   
   ### Gluten version
   
   Gluten-1.4
   
   ### Spark version
   
   Spark-3.5.x
   
   ### Spark configurations
   
   _No response_
   
   ### System information
   
   _No response_
   
   ### Relevant logs
   
   ```bash
   
   ```


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

