jinchengchenghh opened a new issue, #10963:
URL: https://github.com/apache/incubator-gluten/issues/10963

   ### Backend
   
   VL (Velox)
   
   ### Bug description
   
   Observed in CI for https://github.com/apache/incubator-gluten/pull/10962: the Spark test `SPARK-35955: Aggregate avg should not return wrong results for decimal overflow` fails with `java.lang.ArithmeticException: Decimal precision 39 exceeds max precision 38`, thrown from `UnsafeRow.getDecimal` during the fallback hash aggregation.
   ```
    2025-10-28T16:28:46.7157558Z - SPARK-35955: Aggregate avg should not return wrong results for decimal overflow *** FAILED ***
    2025-10-28T16:28:46.7159939Z   org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 279.0 failed 1 times, most recent failure: Lost task 0.0 in stage 279.0 (TID 364) (68c6faa8b5ed executor driver): java.lang.ArithmeticException: Decimal precision 39 exceeds max precision 38
    2025-10-28T16:28:46.7162228Z         at org.apache.spark.sql.errors.QueryExecutionErrors$.decimalPrecisionExceedsMaxPrecisionError(QueryExecutionErrors.scala:1013)
    2025-10-28T16:28:46.7163396Z         at org.apache.spark.sql.types.Decimal.set(Decimal.scala:123)
    2025-10-28T16:28:46.7164100Z         at org.apache.spark.sql.types.Decimal$.apply(Decimal.scala:578)
    2025-10-28T16:28:46.7164788Z         at org.apache.spark.sql.types.Decimal.apply(Decimal.scala)
    2025-10-28T16:28:46.7165652Z         at org.apache.spark.sql.catalyst.expressions.UnsafeRow.getDecimal(UnsafeRow.java:396)
    2025-10-28T16:28:46.7167280Z         at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.hashAgg_doAggregateWithoutKey_0$(Unknown Source)
    2025-10-28T16:28:46.7168986Z         at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
    2025-10-28T16:28:46.7170319Z         at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    2025-10-28T16:28:46.7171521Z         at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:760)
    2025-10-28T16:28:46.7172520Z         at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
    2025-10-28T16:28:46.7173231Z         at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
    2025-10-28T16:28:46.7174065Z         at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1931)
    2025-10-28T16:28:46.7174748Z         at org.apache.spark.rdd.RDD.$anonfun$count$1(RDD.scala:1274)
    2025-10-28T16:28:46.7175437Z         at org.apache.spark.rdd.RDD.$anonfun$count$1$adapted(RDD.scala:1274)
    2025-10-28T16:28:46.7176366Z         at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2268)
    2025-10-28T16:28:46.7177192Z         at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    2025-10-28T16:28:46.7177902Z         at org.apache.spark.scheduler.Task.run(Task.scala:136)
    2025-10-28T16:28:46.7178667Z         at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548)
    2025-10-28T16:28:46.7179507Z         at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504)
    2025-10-28T16:28:46.7180279Z         at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551)
    2025-10-28T16:28:46.7181255Z         at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
    2025-10-28T16:28:46.7182356Z         at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
    2025-10-28T16:28:46.7183173Z         at java.base/java.lang.Thread.run(Thread.java:833)
    2025-10-28T16:28:46.7183641Z 
    2025-10-28T16:28:46.7183778Z Driver stacktrace:
    2025-10-28T16:28:46.7184504Z   at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2672)
    2025-10-28T16:28:46.7185615Z   at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2608)
    2025-10-28T16:28:46.7186841Z   at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2607)
    2025-10-28T16:28:46.7187865Z   at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
    2025-10-28T16:28:46.7188779Z   at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
    2025-10-28T16:28:46.7189647Z   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
    2025-10-28T16:28:46.7190530Z   at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2607)
    2025-10-28T16:28:46.7191567Z   at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1182)
    2025-10-28T16:28:46.7192823Z   at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1182)
    2025-10-28T16:28:46.7193689Z   at scala.Option.foreach(Option.scala:407)
    2025-10-28T16:28:46.7194117Z   ...
    2025-10-28T16:28:46.7194667Z   Cause: java.lang.ArithmeticException: Decimal precision 39 exceeds max precision 38
    2025-10-28T16:28:46.7196197Z   at org.apache.spark.sql.errors.QueryExecutionErrors$.decimalPrecisionExceedsMaxPrecisionError(QueryExecutionErrors.scala:1013)
    2025-10-28T16:28:46.7197447Z   at org.apache.spark.sql.types.Decimal.set(Decimal.scala:123)
    2025-10-28T16:28:46.7198153Z   at org.apache.spark.sql.types.Decimal$.apply(Decimal.scala:578)
    2025-10-28T16:28:46.7198842Z   at org.apache.spark.sql.types.Decimal.apply(Decimal.scala)
    2025-10-28T16:28:46.7199687Z   at org.apache.spark.sql.catalyst.expressions.UnsafeRow.getDecimal(UnsafeRow.java:396)
    2025-10-28T16:28:46.7201396Z   at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.hashAgg_doAggregateWithoutKey_0$(Unknown Source)
    2025-10-28T16:28:46.7203065Z   at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
    2025-10-28T16:28:46.7204394Z   at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    2025-10-28T16:28:46.7205583Z   at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:760)
    2025-10-28T16:28:46.7206728Z   at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
    2025-10-28T16:28:46.7207275Z   ...
   ```
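   The arithmetic behind the exception can be sketched in isolation. The following is a hedged illustration using plain `java.math.BigDecimal`, not Gluten or Spark code; `MaxPrecision`, `fitsSparkDecimal`, and `DecimalOverflowSketch` are illustrative names, with `MaxPrecision` mirroring Spark's `DecimalType.MAX_PRECISION` of 38. Summing two values at the `DECIMAL(38, 0)` maximum, as an `avg` partial sum may, produces an unscaled value that needs 39 digits, which is the precision the stack trace reports:
   ```scala
   import java.math.BigDecimal

   // Hedged sketch only: mimics the precision bound that Spark's Decimal.set
   // enforces (max precision 38); names here are illustrative, not Gluten APIs.
   object DecimalOverflowSketch {
     val MaxPrecision = 38

     // True iff the value would fit a Spark decimal without overflow.
     def fitsSparkDecimal(d: BigDecimal): Boolean = d.precision <= MaxPrecision

     def main(args: Array[String]): Unit = {
       val maxDec38 = new BigDecimal("9" * 38) // largest DECIMAL(38, 0) value
       val sum = maxDec38.add(maxDec38)        // an avg() partial sum
       println(maxDec38.precision)             // 38 -> representable
       println(sum.precision)                  // 39 -> exceeds max precision 38
       println(fitsSparkDecimal(sum))          // false
     }
   }
   ```
   As I understand SPARK-35955, the expected behavior is that such overflow is detected and handled (null under non-ANSI mode) rather than surfacing as an `ArithmeticException` when vanilla Spark codegen reads the 39-digit value back out of the buffer row.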
   
   ### Gluten version
   
   _No response_
   
   ### Spark version
   
   None
   
   ### Spark configurations
   
   _No response_
   
   ### System information
   
   _No response_
   
   ### Relevant logs
   
   ```bash
   
   ```


