loneylee opened a new issue, #8354:
URL: https://github.com/apache/incubator-gluten/issues/8354

   ### Backend
   
   CH (ClickHouse)
   
   ### Bug description
   
   Running the following SQL (`$txt_table_name` is a placeholder for the test table):
   ```
   SELECT aa.project, Sum(aa.c1), Sum(aa.c2)
        , Sum(aa.c3)
   FROM (
        SELECT 'PROJECT' AS project
                , Sum(CASE
                        WHEN string_field = 'b' THEN 0
                        ELSE int_field
                END) AS c1
                , Sum(CASE
                        WHEN string_field = 'b' THEN int_field
                        ELSE 0
                END) AS c2
                , CASE
                        WHEN string_field = 'b' THEN Sum(int_field)
                        ELSE 0
                END AS c3
        FROM $txt_table_name
        GROUP BY  string_field
   ) aa
   GROUP BY aa.project
   LIMIT 500
   ```
   fails with the following error:
   ```
   15:54:46.289 WARN org.apache.gluten.execution.ProjectExecTransformer: Validation failed with exception for plan: ProjectExecTransformer, due to: Failed to bind reference for (string_field#819 = b)#847: Couldn't find (string_field#819 = b)#847 in [string_field#819,sum(CASE WHEN (string_field#819 = b)#847 THEN 0 ELSE int_field#820 END)#834L,sum(CASE WHEN (string_field#819 = b)#847 THEN int_field#820 ELSE 0 END)#835L,sum(int_field#820)#836L]
   15:54:46.330 WARN org.apache.spark.sql.execution.GlutenFallbackReporter: Validation failed for plan: Project, due to: Failed to bind reference for (string_field#819 = b)#847: Couldn't find (string_field#819 = b)#847 in [string_field#819,sum(CASE WHEN (string_field#819 = b)#847 THEN 0 ELSE int_field#820 END)#834L,sum(CASE WHEN (string_field#819 = b)#847 THEN int_field#820 ELSE 0 END)#835L,sum(int_field#820)#836L]; Failed to bind reference for (string_field#819 = b)#847: Couldn't find (string_field#819 = b)#847 in [string_field#819,sum(CASE WHEN (string_field#819 = b)#847 THEN 0 ELSE int_field#820 END)#834L,sum(CASE WHEN (string_field#819 = b)#847 THEN int_field#820 ELSE 0 END)#835L,sum(int_field#820)#836L].
   15:54:46.383 WARN org.apache.gluten.execution.ProjectExecTransformer: Validation failed with exception for plan: ProjectExecTransformer, due to: Failed to bind reference for (string_field#819 = b)#880: Couldn't find (string_field#819 = b)#880 in [string_field#819,sum(CASE WHEN (string_field#819 = b)#880 THEN 0 ELSE int_field#820 END)#834L,sum(CASE WHEN (string_field#819 = b)#880 THEN int_field#820 ELSE 0 END)#835L,sum(int_field#820)#836L]
   15:54:46.402 WARN org.apache.spark.sql.execution.GlutenFallbackReporter: Validation failed for plan: Project[QueryId=28], due to: Failed to bind reference for (string_field#819 = b)#880: Couldn't find (string_field#819 = b)#880 in [string_field#819,sum(CASE WHEN (string_field#819 = b)#880 THEN 0 ELSE int_field#820 END)#834L,sum(CASE WHEN (string_field#819 = b)#880 THEN int_field#820 ELSE 0 END)#835L,sum(int_field#820)#836L]; Failed to bind reference for (string_field#819 = b)#880: Couldn't find (string_field#819 = b)#880 in [string_field#819,sum(CASE WHEN (string_field#819 = b)#880 THEN 0 ELSE int_field#820 END)#834L,sum(CASE WHEN (string_field#819 = b)#880 THEN int_field#820 ELSE 0 END)#835L,sum(int_field#820)#836L].
   
   
   Couldn't find (string_field#819 = b)#880 in [string_field#819,sum(CASE WHEN (string_field#819 = b)#880 THEN 0 ELSE int_field#820 END)#834L,sum(CASE WHEN (string_field#819 = b)#880 THEN int_field#820 ELSE 0 END)#835L,sum(int_field#820)#836L]
   java.lang.IllegalStateException: Couldn't find (string_field#819 = b)#880 in [string_field#819,sum(CASE WHEN (string_field#819 = b)#880 THEN 0 ELSE int_field#820 END)#834L,sum(CASE WHEN (string_field#819 = b)#880 THEN int_field#820 ELSE 0 END)#835L,sum(int_field#820)#836L]
        at org.apache.spark.sql.catalyst.expressions.BindReferences$$anonfun$bindReference$1.applyOrElse(BoundAttribute.scala:80)
        at org.apache.spark.sql.catalyst.expressions.BindReferences$$anonfun$bindReference$1.applyOrElse(BoundAttribute.scala:73)
        at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:584)
        at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:176)
        at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:584)
        at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$3(TreeNode.scala:589)
        at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:286)
        at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
        at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
        at scala.collection.TraversableLike.map(TraversableLike.scala:286)
        at scala.collection.TraversableLike.map$(TraversableLike.scala:279)
        at scala.collection.AbstractTraversable.map(Traversable.scala:108)
        at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:698)
        at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:589)
        at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$3(TreeNode.scala:589)
        at org.apache.spark.sql.catalyst.trees.UnaryLike.mapChildren(TreeNode.scala:1228)
        at org.apache.spark.sql.catalyst.trees.UnaryLike.mapChildren$(TreeNode.scala:1227)
        at org.apache.spark.sql.catalyst.expressions.UnaryExpression.mapChildren(Expression.scala:498)
        at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:589)
        at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:560)
        at org.apache.spark.sql.catalyst.trees.TreeNode.transform(TreeNode.scala:528)
        at org.apache.spark.sql.catalyst.expressions.BindReferences$.bindReference(BoundAttribute.scala:73)
        at org.apache.spark.sql.catalyst.expressions.BindReferences$.$anonfun$bindReferences$1(BoundAttribute.scala:94)
        at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:286)
        at scala.collection.Iterator.foreach(Iterator.scala:943)
        at scala.collection.Iterator.foreach$(Iterator.scala:943)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
        at scala.collection.IterableLike.foreach(IterableLike.scala:74)
        at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
        at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
        at scala.collection.TraversableLike.map(TraversableLike.scala:286)
        at scala.collection.TraversableLike.map$(TraversableLike.scala:279)
        at scala.collection.AbstractTraversable.map(Traversable.scala:108)
        at org.apache.spark.sql.catalyst.expressions.BindReferences$.bindReferences(BoundAttribute.scala:94)
        at org.apache.spark.sql.execution.ProjectExec.doConsume(basicPhysicalOperators.scala:69)
        at org.apache.spark.sql.execution.CodegenSupport.consume(WholeStageCodegenExec.scala:196)
        at org.apache.spark.sql.execution.CodegenSupport.consume$(WholeStageCodegenExec.scala:151)
        at org.apache.spark.sql.execution.InputAdapter.consume(WholeStageCodegenExec.scala:498)
        at org.apache.spark.sql.execution.InputRDDCodegen.doProduce(WholeStageCodegenExec.scala:485)
        at org.apache.spark.sql.execution.InputRDDCodegen.doProduce$(WholeStageCodegenExec.scala:458)
        at org.apache.spark.sql.execution.InputAdapter.doProduce(WholeStageCodegenExec.scala:498)
        at org.apache.spark.sql.execution.CodegenSupport.$anonfun$produce$1(WholeStageCodegenExec.scala:97)
        at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:232)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:229)
        at org.apache.spark.sql.execution.CodegenSupport.produce(WholeStageCodegenExec.scala:92)
   ```
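   For reference, here is a minimal self-contained reproducer sketch. The table name `txt_table` and the sample rows are hypothetical stand-ins (the real `$txt_table_name` is a placeholder in the report); the column names `string_field`/`int_field` are taken from the attribute names in the plan above. Note the bug only surfaces with the Gluten CH backend enabled (via `spark.plugins`, omitted here), since the failure is in Gluten's validation of the `ProjectExecTransformer`:
   ```
   import org.apache.spark.sql.SparkSession

   object GlutenIssue8354Repro {
     def main(args: Array[String]): Unit = {
       val spark = SparkSession.builder()
         .appName("gluten-8354-repro")
         .master("local[1]")
         // The Gluten/ClickHouse plugin configuration is omitted; add
         // spark.plugins and related settings to trigger the validation path.
         .getOrCreate()
       import spark.implicits._

       // Hypothetical sample data matching the column names in the plan.
       Seq(("a", 1), ("b", 2), ("c", 3))
         .toDF("string_field", "int_field")
         .createOrReplaceTempView("txt_table") // stands in for $txt_table_name

       spark.sql(
         """SELECT aa.project, SUM(aa.c1), SUM(aa.c2), SUM(aa.c3)
           |FROM (
           |  SELECT 'PROJECT' AS project,
           |         SUM(CASE WHEN string_field = 'b' THEN 0 ELSE int_field END) AS c1,
           |         SUM(CASE WHEN string_field = 'b' THEN int_field ELSE 0 END) AS c2,
           |         CASE WHEN string_field = 'b' THEN SUM(int_field) ELSE 0 END AS c3
           |  FROM txt_table
           |  GROUP BY string_field
           |) aa
           |GROUP BY aa.project
           |LIMIT 500""".stripMargin).show()
     }
   }
   ```
   The distinguishing feature of the query is `c3`, where the `CASE WHEN` predicate on the grouping key sits *outside* the aggregate (`CASE WHEN ... THEN SUM(int_field) ELSE 0 END`), unlike `c1`/`c2` where it sits inside; the unbound reference `(string_field#819 = b)#847` in the error corresponds to that predicate.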
   
   ### Spark version
   
   None
   
   ### Spark configurations
   
   _No response_
   
   ### System information
   
   _No response_
   
   ### Relevant logs
   
   _No response_


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

