wenfang6 opened a new issue, #6790:
URL: https://github.com/apache/incubator-gluten/issues/6790
### Backend
VL (Velox)
### Bug description
With `spark.sql.adaptive.enabled=true`, running TPC-DS q57.sql fails with the following error:
```
java.lang.IllegalStateException: Internal Error class
org.apache.spark.sql.execution.ColumnarBroadcastExchangeExec has column support
mismatch:
ColumnarBroadcastExchange HashedRelationBroadcastMode(List(input[0, string,
true], input[1, string, true], input[2, string, true], (input[4, int, false] +
1)),false), [id=#2487]
+- ^(17) ProjectExecTransformer [i_category#421, i_brand#417, cc_name#499,
sum_sales#10 AS sum_sales#149, rn#528]
+- ^(17) WindowExecTransformer [rank(d_year#471, d_moy#473)
windowspecdefinition(i_category#421, i_brand#417, cc_name#499, d_year#471 ASC
NULLS FIRST, d_moy#473 ASC NULLS FIRST, specifiedwindowframe(RowFrame,
unboundedpreceding$(), currentrow$())) AS rn#528], [i_category#421,
i_brand#417, cc_name#499], [d_year#471 ASC NULLS FIRST, d_moy#473 ASC NULLS
FIRST]
+- ^(17) SortExecTransformer [i_category#421 ASC NULLS FIRST,
i_brand#417 ASC NULLS FIRST, cc_name#499 ASC NULLS FIRST, d_year#471 ASC NULLS
FIRST, d_moy#473 ASC NULLS FIRST], false, 0
+- ^(17) InputIteratorTransformer[i_category#421, i_brand#417,
cc_name#499, d_year#471, d_moy#473, sum_sales#10]
+- AQEShuffleRead coalesced
+- ShuffleQueryStage 21
+- ColumnarExchange hashpartitioning(i_category#421,
i_brand#417, cc_name#499, 400), ENSURE_REQUIREMENTS, [i_category#421,
i_brand#417, cc_name#499, d_year#471, d_moy#473, sum_sales#10], [id=#2080],
[id=#2080], [OUTPUT] List(i_category:StringType, i_brand:StringType,
cc_name:StringType, d_year:IntegerType, d_moy:IntegerType,
sum_sales:DecimalType(17,2)), [OUTPUT] List(i_category:StringType,
i_brand:StringType, cc_name:StringType, d_year:IntegerType, d_moy:IntegerType,
sum_sales:DecimalType(17,2))
+- VeloxAppendBatches 3276
+- ^(14) ProjectExecTransformer
[hash(i_category#421, i_brand#417, cc_name#499, 42) AS hash_partition_key#810,
i_category#421, i_brand#417, cc_name#499, d_year#471, d_moy#473,
MakeDecimal(sum(UnscaledValue(cs_sales_price#451))#132L,17,2) AS sum_sales#10]
+- ^(14)
HashAggregateTransformer(keys=[i_category#421, i_brand#417, cc_name#499,
d_year#471, d_moy#473], functions=[sum(UnscaledValue(cs_sales_price#451))],
output=[i_category#421, i_brand#417, cc_name#499, d_year#471, d_moy#473,
sum(UnscaledValue(cs_sales_price#451))#132L])
+- ^(14)
InputIteratorTransformer[i_category#421, i_brand#417, cc_name#499, d_year#471,
d_moy#473, sum#652L]
+- ShuffleQueryStage 17
+- ReusedExchange [i_category#421,
i_brand#417, cc_name#499, d_year#471, d_moy#473, sum#652L], ColumnarExchange
hashpartitioning(i_category#29, i_brand#25, cc_name#107, d_year#79, d_moy#81,
400), ENSURE_REQUIREMENTS, [i_category#29, i_brand#25, cc_name#107, d_year#79,
d_moy#81, sum#650L], [id=#1298], [id=#1298], [OUTPUT]
List(i_category:StringType, i_brand:StringType, cc_name:StringType,
d_year:IntegerType, d_moy:IntegerType, sum:LongType)
at
org.apache.spark.sql.execution.SparkPlan.doExecuteColumnar(SparkPlan.scala:313)
at
org.apache.spark.sql.execution.SparkPlan.$anonfun$executeColumnar$1(SparkPlan.scala:212)
at
org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:223)
at
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at
org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:220)
at
org.apache.spark.sql.execution.SparkPlan.executeColumnar(SparkPlan.scala:208)
at
org.apache.spark.sql.execution.exchange.ReusedExchangeExec.doExecuteColumnar(Exchange.scala:61)
at
org.apache.spark.sql.execution.SparkPlan.$anonfun$executeColumnar$1(SparkPlan.scala:212)
at
org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:223)
at
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at
org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:220)
at
org.apache.spark.sql.execution.SparkPlan.executeColumnar(SparkPlan.scala:208)
at
org.apache.spark.sql.execution.adaptive.QueryStageExec.doExecuteColumnar(QueryStageExec.scala:121)
at
org.apache.spark.sql.execution.SparkPlan.$anonfun$executeColumnar$1(SparkPlan.scala:212)
at
org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:223)
at
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at
org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:220)
at
org.apache.spark.sql.execution.SparkPlan.executeColumnar(SparkPlan.scala:208)
at
org.apache.gluten.backendsapi.velox.VeloxSparkPlanExecApi.createBroadcastRelation(VeloxSparkPlanExecApi.scala:626)
at
org.apache.spark.sql.execution.ColumnarBroadcastExchangeExec.$anonfun$relationFuture$2(ColumnarBroadcastExchangeExec.scala:77)
at org.apache.gluten.utils.Arm$.withResource(Arm.scala:25)
at
org.apache.gluten.metrics.GlutenTimeMetric$.millis(GlutenTimeMetric.scala:37)
at
org.apache.spark.sql.execution.ColumnarBroadcastExchangeExec.$anonfun$relationFuture$1(ColumnarBroadcastExchangeExec.scala:65)
at
org.apache.spark.sql.execution.SQLExecution$.$anonfun$withThreadLocalCaptured$1(SQLExecution.scala:185)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
```
### Spark version
Spark-3.2.x
### Spark configurations
--conf spark.sql.adaptive.enabled=true
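For reference, a minimal reproduction sketch. Only `spark.sql.adaptive.enabled=true` comes from this report; the plugin, off-heap, and shuffle-manager settings follow the usual Gluten/Velox setup and are assumptions, as are the jar path and the location of `q57.sql`:

```shell
# Hypothetical reproduction command; adjust jar paths and the TPC-DS
# data/query locations to your environment. The trigger is AQE being on.
spark-sql \
  --jars /path/to/gluten-velox-bundle.jar \
  --conf spark.plugins=org.apache.gluten.GlutenPlugin \
  --conf spark.shuffle.manager=org.apache.spark.shuffle.sort.ColumnarShuffleManager \
  --conf spark.memory.offHeap.enabled=true \
  --conf spark.memory.offHeap.size=4g \
  --conf spark.sql.adaptive.enabled=true \
  -f q57.sql
```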
### System information
_No response_
### Relevant logs
_No response_
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]