Github user chenghao-intel commented on a diff in the pull request:

    https://github.com/apache/spark/pull/9419#discussion_r43968503
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala ---
    @@ -232,7 +232,7 @@ class Analyzer(
             // substitute the group by expressions.
             val newGroupByExprs = groupByExprPairs.map(_._2)
    --- End diff --
    
    Oh, sorry, actually there isn't such a big difference as I mentioned.
    
    But I got an error when running the query below; can you please take a look at it?
    ```scala
    select sum(a+b) as ab from mytable group by a+b, b with rollup;
    15/11/04 17:46:36 ERROR thriftserver.SparkSQLDriver: Failed in [select sum(a+b) as ab from mytable group by a+b, b with rollup]
    org.apache.spark.sql.catalyst.analysis.UnresolvedException: Invalid call to dataType on unresolved object, tree: '(cast(a#109 as double) + b#110)
        at org.apache.spark.sql.catalyst.analysis.UnresolvedAttribute.dataType(unresolved.scala:59)
        at org.apache.spark.sql.catalyst.plans.logical.Expand$$anonfun$expand$1$$anonfun$5$$anonfun$apply$3.applyOrElse(basicOperators.scala:291)
        at org.apache.spark.sql.catalyst.plans.logical.Expand$$anonfun$expand$1$$anonfun$5$$anonfun$apply$3.applyOrElse(basicOperators.scala:287)
        at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:227)
        at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:227)
        at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:51)
        at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:226)
        at org.apache.spark.sql.catalyst.plans.logical.Expand$$anonfun$expand$1$$anonfun$5.apply(basicOperators.scala:287)
        at org.apache.spark.sql.catalyst.plans.logical.Expand$$anonfun$expand$1$$anonfun$5.apply(basicOperators.scala:287)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
        at scala.collection.AbstractTraversable.map(Traversable.scala:105)
        at org.apache.spark.sql.catalyst.plans.logical.Expand$$anonfun$expand$1.apply(basicOperators.scala:287)
        at org.apache.spark.sql.catalyst.plans.logical.Expand$$anonfun$expand$1.apply(basicOperators.scala:283)
        at scala.collection.immutable.List.foreach(List.scala:318)
        at org.apache.spark.sql.catalyst.plans.logical.Expand.expand(basicOperators.scala:283)
        at org.apache.spark.sql.catalyst.plans.logical.Expand.<init>(basicOperators.scala:254)
        at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveGroupingAnalytics$$anonfun$apply$6.applyOrElse(Analyzer.scala:293)
        at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveGroupingAnalytics$$anonfun$apply$6.applyOrElse(Analyzer.scala:200)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$resolveOperators$1.apply(LogicalPlan.scala:57)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$resolveOperators$1.apply(LogicalPlan.scala:57)
        at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:51)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:56)
        at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveGroupingAnalytics$.apply(Analyzer.scala:200)
        at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveGroupingAnalytics$.apply(Analyzer.scala:173)
        at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:83)
        at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:80)
        at scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:111)
        at scala.collection.immutable.List.foldLeft(List.scala:84)
        at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:80)
        at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:72)
        at scala.collection.immutable.List.foreach(List.scala:318)
        at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:72)
        at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:38)
        at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:38)
        at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:36)
        at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:132)
        at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:51)
        at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:784)
    ```
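
    For context, the diff line being discussed (`val newGroupByExprs = groupByExprPairs.map(_._2)`) keeps only the substituted side of each (original expression, substituted attribute) pair. Below is a minimal, hypothetical sketch of that substitution step using plain strings as stand-ins for Catalyst `Expression`s; the pair contents are illustrative, not what the analyzer actually produces:

    ```scala
    // Hypothetical model of the substitution in Analyzer.scala:
    // each pair maps an original group-by expression to its substituted attribute.
    val groupByExprPairs: Seq[(String, String)] = Seq(
      "a + b" -> "ab#1",   // aliased expression -> substituted attribute (illustrative)
      "b"     -> "b#110"
    )

    // The line quoted in the diff: keep only the substituted expressions.
    val newGroupByExprs = groupByExprPairs.map(_._2)

    println(newGroupByExprs.mkString(", "))  // ab#1, b#110
    ```

    The error above suggests that for `sum(a+b) as ab ... group by a+b, b with rollup`, an unresolved expression still reaches `Expand`, where calling `dataType` on it throws.
    
    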
