Github user gatorsmile commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21753#discussion_r202107542
  
    --- Diff: sql/core/src/test/resources/sql-tests/results/pivot.sql.out ---
    @@ -192,3 +192,33 @@ struct<>
     -- !query 12 output
     org.apache.spark.sql.AnalysisException
     cannot resolve '`year`' given input columns: [__auto_generated_subquery_name.course, __auto_generated_subquery_name.earnings]; line 4 pos 0
    +
    +
    +-- !query 13
    +SELECT * FROM (
    +  SELECT year, course, earnings FROM courseSales
    +)
    +PIVOT (
    +  ceil(sum(earnings)), avg(earnings) + 1 as a1
    +  FOR course IN ('dotNET', 'Java')
    +)
    +-- !query 13 schema
    +struct<year:int,dotNET_CEIL(sum(CAST(earnings AS BIGINT))):bigint,dotNET_a1:double,Java_CEIL(sum(CAST(earnings AS BIGINT))):bigint,Java_a1:double>
    +-- !query 13 output
    +2012       15000   7501.0  20000   20001.0
    +2013       48000   48001.0 30000   30001.0
    +
    +
    +-- !query 14
    +SELECT * FROM (
    +  SELECT year, course, earnings FROM courseSales
    +)
    +PIVOT (
    +  sum(avg(earnings))
    +  FOR course IN ('dotNET', 'Java')
    +)
    +-- !query 14 schema
    +struct<>
    +-- !query 14 output
    +org.apache.spark.sql.AnalysisException
    +It is not allowed to use an aggregate function in the argument of another aggregate function. Please use the inner aggregate function in a sub-query.;
    --- End diff --
    
    > But you reminded me that I might not need to check the aggregate function arguments here and leave it to CheckAnalysis since this check is independent of the context and always outputs the same error message.
    
    The general principle in our Analyzer is to do the error handling in `CheckAnalysis`, unless a better (more readable) error message can be issued from the rule itself.
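
    To illustrate the principle (this is a minimal sketch with a toy expression AST, not Spark's actual `Expression` classes or the real `CheckAnalysis` code): a context-independent check like "no aggregate inside another aggregate's arguments" can live in a single post-analysis traversal, since it always produces the same error message regardless of where the expression appears.

    ```scala
    // Toy AST standing in for Spark's Expression tree (hypothetical names).
    sealed trait Expr { def children: Seq[Expr] }
    case class Column(name: String) extends Expr { def children = Nil }
    case class Agg(fn: String, arg: Expr) extends Expr { def children = Seq(arg) }

    object NestedAggCheck {
      // CheckAnalysis-style traversal: flag any aggregate function that
      // appears inside the argument of another aggregate function,
      // e.g. sum(avg(earnings)) in query 14 above.
      def check(e: Expr): Option[String] = e match {
        case Agg(_, arg) if containsAgg(arg) =>
          Some("It is not allowed to use an aggregate function in the " +
               "argument of another aggregate function.")
        case _ => e.children.flatMap(check).headOption
      }

      private def containsAgg(e: Expr): Boolean = e match {
        case _: Agg => true
        case _      => e.children.exists(containsAgg)
      }
    }
    ```

    For example, `NestedAggCheck.check(Agg("sum", Agg("avg", Column("earnings"))))` reports the error, while `NestedAggCheck.check(Agg("sum", Column("earnings")))` passes. Because the check needs no surrounding context, there is no readability gain from duplicating it in the PIVOT rule.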

