orbigtuna opened a new issue, #8322:
URL: https://github.com/apache/incubator-gluten/issues/8322

   ### Backend
   
   VL (Velox)
   
   ### Bug description
   
   I am testing a Spark SQL application with the apache-gluten-1.2.1-incubating-bin-spark35.tar.gz release, passing its bundled jar via `--jars`, and running this SQL:
   
   -- start query 1 in stream 0 using template query2.tpl using seed 1200409435
   with wscs as
    (select sold_date_sk
           ,sales_price
     from  (select ws_sold_date_sk sold_date_sk
                 ,ws_ext_sales_price sales_price
           from web_sales 
           union all
           select cs_sold_date_sk sold_date_sk
                 ,cs_ext_sales_price sales_price
           from catalog_sales) x),
    wswscs as 
    (select d_week_seq,
           sum(case when (d_day_name='Sunday') then sales_price else null end) sun_sales,
           sum(case when (d_day_name='Monday') then sales_price else null end) mon_sales,
           sum(case when (d_day_name='Tuesday') then sales_price else null end) tue_sales,
           sum(case when (d_day_name='Wednesday') then sales_price else null end) wed_sales,
           sum(case when (d_day_name='Thursday') then sales_price else null end) thu_sales,
           sum(case when (d_day_name='Friday') then sales_price else null end) fri_sales,
           sum(case when (d_day_name='Saturday') then sales_price else null end) sat_sales
    from wscs
        ,date_dim
    where d_date_sk = sold_date_sk
    group by d_week_seq)
    select d_week_seq1
          ,round(sun_sales1/sun_sales2,2)
          ,round(mon_sales1/mon_sales2,2)
          ,round(tue_sales1/tue_sales2,2)
          ,round(wed_sales1/wed_sales2,2)
          ,round(thu_sales1/thu_sales2,2)
          ,round(fri_sales1/fri_sales2,2)
          ,round(sat_sales1/sat_sales2,2)
    from
    (select wswscs.d_week_seq d_week_seq1
           ,sun_sales sun_sales1
           ,mon_sales mon_sales1
           ,tue_sales tue_sales1
           ,wed_sales wed_sales1
           ,thu_sales thu_sales1
           ,fri_sales fri_sales1
           ,sat_sales sat_sales1
     from wswscs,date_dim 
     where date_dim.d_week_seq = wswscs.d_week_seq and
           d_year = 2001) y,
    (select wswscs.d_week_seq d_week_seq2
           ,sun_sales sun_sales2
           ,mon_sales mon_sales2
           ,tue_sales tue_sales2
           ,wed_sales wed_sales2
           ,thu_sales thu_sales2
           ,fri_sales fri_sales2
           ,sat_sales sat_sales2
     from wswscs
         ,date_dim 
     where date_dim.d_week_seq = wswscs.d_week_seq and
           d_year = 2001+1) z
    where d_week_seq1=d_week_seq2-53
    order by d_week_seq1;
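   For reference, this is TPC-DS query 2: it pivots daily sales into per-week rows, then joins the weeks of 2001 against the weeks of 2002 via the offset `d_week_seq1 = d_week_seq2 - 53` and emits day-by-day sales ratios. A minimal sketch of that week alignment, using made-up week-sequence numbers and totals:

```python
# Toy illustration of the d_week_seq1 = d_week_seq2 - 53 join in the query
# above; the week numbers and sales figures here are invented.
weeks_2001 = {5270 + i: 100.0 + i for i in range(3)}  # d_week_seq -> sun_sales
weeks_2002 = {5323 + i: 110.0 + i for i in range(3)}  # 53 week-seqs later

# For each 2001 week, divide by the matching 2002 week, as round(x/y, 2) does.
ratios = {
    w1: round(weeks_2001[w1] / weeks_2002[w1 + 53], 2)
    for w1 in weeks_2001
    if w1 + 53 in weeks_2002
}
print(ratios)
```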
   
   Running it fails with the error shown in the log below. My test environment is spark-3.5.0-bin-hadoop3 in local mode.
   
   ### Spark version
   
   Spark-3.5.x
   
   ### Spark configurations
   
    --master local \
    --conf spark.plugins=org.apache.gluten.GlutenPlugin \
    --conf spark.memory.offHeap.enabled=true \
    --conf spark.memory.offHeap.size=5g \
    --conf spark.shuffle.manager=org.apache.spark.shuffle.sort.ColumnarShuffleManager \
    --jars gluten-velox-bundle-spark3.5_2.12-centos_7_x86_64-1.2.1.jar
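
   For completeness, the flags above assemble into a launch command along these lines (the `spark-sql` entry point, the SQL file name, and the paths are assumptions, not taken from the report):

```shell
# Hypothetical full invocation assembled from the flags above;
# query2.sql and the jar location are placeholders.
$SPARK_HOME/bin/spark-sql \
  --master local \
  --conf spark.plugins=org.apache.gluten.GlutenPlugin \
  --conf spark.memory.offHeap.enabled=true \
  --conf spark.memory.offHeap.size=5g \
  --conf spark.shuffle.manager=org.apache.spark.shuffle.sort.ColumnarShuffleManager \
  --jars gluten-velox-bundle-spark3.5_2.12-centos_7_x86_64-1.2.1.jar \
  -f query2.sql
```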
   
   ### System information
   
   CentOS Linux release 7.9.2009 (Core)
   
   ### Relevant logs
   
   ```bash
   Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/spark/sql/catalyst/plans/physical/HashPartitioningLike
       at org.apache.spark.sql.execution.ExpandOutputPartitioningShim.expandPartitioning(ExpandOutputPartitioningShim.scala:46)
       at org.apache.gluten.execution.HashJoinLikeExecTransformer.expandPartitioning(JoinExecTransformer.scala:203)
       at org.apache.gluten.execution.HashJoinLikeExecTransformer.outputPartitioning(JoinExecTransformer.scala:187)
       at org.apache.gluten.execution.HashJoinLikeExecTransformer.outputPartitioning$(JoinExecTransformer.scala:174)
       at org.apache.gluten.execution.BroadcastHashJoinExecTransformerBase.outputPartitioning(JoinExecTransformer.scala:373)
       at org.apache.spark.sql.execution.PartitioningPreservingUnaryExecNode.outputPartitioning(AliasAwareOutputExpression.scala:33)
       at org.apache.spark.sql.execution.PartitioningPreservingUnaryExecNode.outputPartitioning$(AliasAwareOutputExpression.scala:31)
       at org.apache.gluten.execution.ProjectExecTransformer.outputPartitioning(BasicPhysicalOperatorTransformer.scala:163)
       at org.apache.gluten.extension.FlushableHashAggregateRule$.org$apache$gluten$extension$FlushableHashAggregateRule$$isAggInputAlreadyDistributedWithAggKeys(FlushableHashAggregateRule.scala:108)
       at org.apache.gluten.extension.FlushableHashAggregateRule.$anonfun$replaceEligibleAggregates$1(FlushableHashAggregateRule.scala:65)
       at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:286)
       at scala.collection.Iterator.foreach(Iterator.scala:943)
       at scala.collection.Iterator.foreach$(Iterator.scala:943)
       at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
       at scala.collection.IterableLike.foreach(IterableLike.scala:74)
       at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
       at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
       at scala.collection.TraversableLike.map(TraversableLike.scala:286)
       at scala.collection.TraversableLike.map$(TraversableLike.scala:279)
       at scala.collection.AbstractTraversable.map(Traversable.scala:108)
       at org.apache.gluten.extension.FlushableHashAggregateRule.$anonfun$replaceEligibleAggregates$1(FlushableHashAggregateRule.scala:71)
       at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:286)
       at scala.collection.Iterator.foreach(Iterator.scala:943)
       at scala.collection.Iterator.foreach$(Iterator.scala:943)
       at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
       at scala.collection.IterableLike.foreach(IterableLike.scala:74)
       at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
       at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
       at scala.collection.TraversableLike.map(TraversableLike.scala:286)
       at scala.collection.TraversableLike.map$(TraversableLike.scala:279)
       at scala.collection.AbstractTraversable.map(Traversable.scala:108)
       at org.apache.gluten.extension.FlushableHashAggregateRule.$anonfun$replaceEligibleAggregates$1(FlushableHashAggregateRule.scala:71)
       at org.apache.gluten.extension.FlushableHashAggregateRule.org$apache$gluten$extension$FlushableHashAggregateRule$$replaceEligibleAggregates(FlushableHashAggregateRule.scala:74)
       at org.apache.gluten.extension.FlushableHashAggregateRule$$anonfun$apply$1.applyOrElse(FlushableHashAggregateRule.scala:41)
       at org.apache.gluten.extension.FlushableHashAggregateRule$$anonfun$apply$1.applyOrElse(FlushableHashAggregateRule.scala:34)
       at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformUpWithPruning$2(TreeNode.scala:515)
       at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(origin.scala:76)
       at org.apache.spark.sql.catalyst.trees.TreeNode.transformUpWithPruning(TreeNode.scala:515)
       at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:488)
       at org.apache.gluten.extension.FlushableHashAggregateRule.apply(FlushableHashAggregateRule.scala:34)
       at org.apache.gluten.extension.FlushableHashAggregateRule.apply(FlushableHashAggregateRule.scala:32)
       at org.apache.gluten.extension.columnar.ColumnarRuleApplier$LoggedRule.$anonfun$apply$1(ColumnarRuleApplier.scala:54)
       at org.apache.gluten.metrics.GlutenTimeMetric$.withNanoTime(GlutenTimeMetric.scala:41)
       at org.apache.gluten.metrics.GlutenTimeMetric$.withMillisTime(GlutenTimeMetric.scala:46)
       at org.apache.gluten.extension.columnar.ColumnarRuleApplier$LoggedRule.apply(ColumnarRuleApplier.scala:57)
       at org.apache.gluten.extension.columnar.ColumnarRuleApplier$LoggedRule.apply(ColumnarRuleApplier.scala:42)
       at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$2(RuleExecutor.scala:222)
       at scala.collection.LinearSeqOptimized.foldLeft(LinearSeqOptimized.scala:126)
       at scala.collection.LinearSeqOptimized.foldLeft$(LinearSeqOptimized.scala:122)
       at scala.collection.immutable.List.foldLeft(List.scala:91)
       at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1(RuleExecutor.scala:219)
       at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1$adapted(RuleExecutor.scala:211)
       at scala.collection.immutable.List.foreach(List.scala:431)
       at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:211)
       at org.apache.gluten.extension.columnar.heuristic.HeuristicApplier.transformPlan(HeuristicApplier.scala:73)
       at org.apache.gluten.extension.columnar.heuristic.HeuristicApplier.$anonfun$withTransformRules$3(HeuristicApplier.scala:54)
       at org.apache.gluten.extension.columnar.heuristic.HeuristicApplier.prepareFallback(HeuristicApplier.scala:80)
       at org.apache.gluten.extension.columnar.heuristic.HeuristicApplier.$anonfun$withTransformRules$2(HeuristicApplier.scala:53)
       at org.apache.gluten.utils.QueryPlanSelector.maybe(QueryPlanSelector.scala:74)
       at org.apache.gluten.extension.columnar.heuristic.HeuristicApplier.org$apache$gluten$extension$columnar$heuristic$HeuristicApplier$$$anonfun$withTransformRules$1(HeuristicApplier.scala:51)
       at org.apache.gluten.extension.columnar.heuristic.HeuristicApplier$$anonfun$withTransformRules$8.apply(HeuristicApplier.scala:50)
       at org.apache.gluten.extension.columnar.heuristic.HeuristicApplier$$anonfun$withTransformRules$8.apply(HeuristicApplier.scala:50)
       at org.apache.gluten.extension.columnar.heuristic.HeuristicApplier.apply(HeuristicApplier.scala:45)
       at org.apache.gluten.extension.ColumnarOverrideRules.org$apache$gluten$extension$ColumnarOverrideRules$$$anonfun$postColumnarTransitions$1(ColumnarOverrides.scala:125)
       at org.apache.gluten.extension.ColumnarOverrideRules$$anonfun$postColumnarTransitions$2.apply(ColumnarOverrides.scala:116)
       at org.apache.gluten.extension.ColumnarOverrideRules$$anonfun$postColumnarTransitions$2.apply(ColumnarOverrides.scala:116)
       at org.apache.spark.sql.execution.ApplyColumnarRulesAndInsertTransitions.$anonfun$apply$2(Columnar.scala:532)
       at org.apache.spark.sql.execution.ApplyColumnarRulesAndInsertTransitions.$anonfun$apply$2$adapted(Columnar.scala:532)
       at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
       at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
       at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
       at org.apache.spark.sql.execution.ApplyColumnarRulesAndInsertTransitions.apply(Columnar.scala:532)
       at org.apache.spark.sql.execution.ApplyColumnarRulesAndInsertTransitions.apply(Columnar.scala:482)
       at org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec$.$anonfun$applyPhysicalRules$2(AdaptiveSparkPlanExec.scala:829)
       at scala.collection.LinearSeqOptimized.foldLeft(LinearSeqOptimized.scala:126)
       at scala.collection.LinearSeqOptimized.foldLeft$(LinearSeqOptimized.scala:122)
       at scala.collection.immutable.List.foldLeft(List.scala:91)
       at org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec$.applyPhysicalRules(AdaptiveSparkPlanExec.scala:828)
       at org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec.newQueryStage(AdaptiveSparkPlanExec.scala:576)
       at org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec.createQueryStages(AdaptiveSparkPlanExec.scala:522)
       at org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec.$anonfun$createQueryStages$2(AdaptiveSparkPlanExec.scala:561)
       at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:286)
       at scala.collection.Iterator.foreach(Iterator.scala:943)
       at scala.collection.Iterator.foreach$(Iterator.scala:943)
       at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
       at scala.collection.IterableLike.foreach(IterableLike.scala:74)
       at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
       at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
       at scala.collection.TraversableLike.map(TraversableLike.scala:286)
       at scala.collection.TraversableLike.map$(TraversableLike.scala:279)
       at scala.collection.AbstractTraversable.map(Traversable.scala:108)
       at org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec.createQueryStages(AdaptiveSparkPlanExec.scala:561)
       at org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec.$anonfun$createQueryStages$2(AdaptiveSparkPlanExec.scala:561)
       at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:286)
       at scala.collection.Iterator.foreach(Iterator.scala:943)
       at scala.collection.Iterator.foreach$(Iterator.scala:943)
       at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
       at scala.collection.IterableLike.foreach(IterableLike.scala:74)
       at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
       at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
       at scala.collection.TraversableLike.map(TraversableLike.scala:286)
       at scala.collection.TraversableLike.map$(TraversableLike.scala:279)
       at scala.collection.AbstractTraversable.map(Traversable.scala:108)
       at org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec.createQueryStages(AdaptiveSparkPlanExec.scala:561)
       at org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec.$anonfun$createQueryStages$2(AdaptiveSparkPlanExec.scala:561)
       at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:286)
       at scala.collection.Iterator.foreach(Iterator.scala:943)
       at scala.collection.Iterator.foreach$(Iterator.scala:943)
       at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
       at scala.collection.IterableLike.foreach(IterableLike.scala:74)
       at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
       at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
       at scala.collection.TraversableLike.map(TraversableLike.scala:286)
       at scala.collection.TraversableLike.map$(TraversableLike.scala:279)
       at scala.collection.AbstractTraversable.map(Traversable.scala:108)
       at org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec.createQueryStages(AdaptiveSparkPlanExec.scala:561)
       at org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec.$anonfun$createQueryStages$2(AdaptiveSparkPlanExec.scala:561)
       at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:286)
       at scala.collection.Iterator.foreach(Iterator.scala:943)
       at scala.collection.Iterator.foreach$(Iterator.scala:943)
       at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
       at scala.collection.IterableLike.foreach(IterableLike.scala:74)
       at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
       at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
       at scala.collection.TraversableLike.map(TraversableLike.scala:286)
       at scala.collection.TraversableLike.map$(TraversableLike.scala:279)
       at scala.collection.AbstractTraversable.map(Traversable.scala:108)
       at org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec.createQueryStages(AdaptiveSparkPlanExec.scala:561)
       at org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec.$anonfun$createQueryStages$2(AdaptiveSparkPlanExec.scala:561)
       at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:286)
       at scala.collection.Iterator.foreach(Iterator.scala:943)
       at scala.collection.Iterator.foreach$(Iterator.scala:943)
       at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
       at scala.collection.IterableLike.foreach(IterableLike.scala:74)
       at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
       at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
       at scala.collection.TraversableLike.map(TraversableLike.scala:286)
       at scala.collection.TraversableLike.map$(TraversableLike.scala:279)
       at scala.collection.AbstractTraversable.map(Traversable.scala:108)
       at org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec.createQueryStages(AdaptiveSparkPlanExec.scala:561)
       at org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec.$anonfun$createQueryStages$2(AdaptiveSparkPlanExec.scala:561)
       at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:286)
       at scala.collection.Iterator.foreach(Iterator.scala:943)
       at scala.collection.Iterator.foreach$(Iterator.scala:943)
       at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
       at scala.collection.IterableLike.foreach(IterableLike.scala:74)
       at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
       at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
       at scala.collection.TraversableLike.map(TraversableLike.scala:286)
       at scala.collection.TraversableLike.map$(TraversableLike.scala:279)
       at scala.collection.AbstractTraversable.map(Traversable.scala:108)
       at org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec.createQueryStages(AdaptiveSparkPlanExec.scala:561)
       at org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec.$anonfun$createQueryStages$2(AdaptiveSparkPlanExec.scala:561)
       at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:286)
       at scala.collection.Iterator.foreach(Iterator.scala:943)
       at scala.collection.Iterator.foreach$(Iterator.scala:943)
       at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
       at scala.collection.IterableLike.foreach(IterableLike.scala:74)
       at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
       at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
       at scala.collection.TraversableLike.map(TraversableLike.scala:286)
       at scala.collection.TraversableLike.map$(TraversableLike.scala:279)
       at scala.collection.AbstractTraversable.map(Traversable.scala:108)
       at org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec.createQueryStages(AdaptiveSparkPlanExec.scala:561)
       at org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec.$anonfun$getFinalPhysicalPlan$1(AdaptiveSparkPlanExec.scala:348)
       at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:900)
       at org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec.getFinalPhysicalPlan(AdaptiveSparkPlanExec.scala:256)
       at org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec.withFinalPlanUpdate(AdaptiveSparkPlanExec.scala:401)
       at org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec.executeCollect(AdaptiveSparkPlanExec.scala:374)
       at org.apache.spark.sql.Dataset.collectFromPlan(Dataset.scala:4344)
       at org.apache.spark.sql.Dataset.$anonfun$head$1(Dataset.scala:3326)
       at org.apache.spark.sql.Dataset.$anonfun$withAction$2(Dataset.scala:4334)
       at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:546)
       at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:4332)
       at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:125)
       at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:201)
       at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:108)
       at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:900)
       at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:66)
       at org.apache.spark.sql.Dataset.withAction(Dataset.scala:4332)
       at org.apache.spark.sql.Dataset.head(Dataset.scala:3326)
       at org.apache.spark.sql.Dataset.take(Dataset.scala:3549)
       at org.apache.spark.sql.Dataset.getRows(Dataset.scala:280)
       at org.apache.spark.sql.Dataset.showString(Dataset.scala:315)
       at org.apache.spark.sql.Dataset.show(Dataset.scala:839)
   ```
   

