gengliangwang commented on code in PR #39509:
URL: https://github.com/apache/spark/pull/39509#discussion_r1067663382


##########
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveGroupByAll.scala:
##########

@@ -93,8 +93,9 @@ object ResolveGroupByAll extends Rule[LogicalPlan] {
    * end of analysis, so we can tell users that we fail to infer the grouping columns.
    */
   def checkAnalysis(operator: LogicalPlan): Unit = operator match {
-    case a: Aggregate if matchToken(a) =>
-      if (a.aggregateExpressions.exists(_.exists(_.isInstanceOf[Attribute]))) {
+    case a: Aggregate if a.aggregateExpressions.forall(_.resolved) && matchToken(a) =>

Review Comment:
   You are right about the check here. For the grouping expression inference, I think we can refactor it as https://github.com/apache/spark/pull/39523

-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
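The guard added in the diff runs the attribute check only after every aggregate expression has been resolved, since scanning a still-unresolved tree for `Attribute` nodes can give a misleading answer. The following is a minimal, self-contained sketch of that idea using toy stand-ins (these are NOT Spark's actual Catalyst classes; `Expr`, `attributeCheck`, and the simplified `Attribute`/`Alias` types here are illustrative assumptions):

```scala
// Toy expression tree with a `resolved` flag, loosely mirroring how
// Catalyst expressions expose `resolved` and a recursive `exists`.
sealed trait Expr {
  def resolved: Boolean
  def children: Seq[Expr]
  // Recursive tree search, analogous in spirit to TreeNode.exists.
  def exists(p: Expr => Boolean): Boolean = p(this) || children.exists(_.exists(p))
}

case class Attribute(name: String) extends Expr {
  val resolved = true
  val children: Seq[Expr] = Nil
}

case class UnresolvedAttribute(name: String) extends Expr {
  val resolved = false
  val children: Seq[Expr] = Nil
}

case class Alias(child: Expr, name: String) extends Expr {
  def resolved: Boolean = child.resolved
  def children: Seq[Expr] = Seq(child)
}

object GroupByAllCheck {
  // Defer the check (None) while any expression is unresolved, matching the
  // `forall(_.resolved)` guard in the diff; otherwise report whether any
  // expression still references an Attribute.
  def attributeCheck(exprs: Seq[Expr]): Option[Boolean] =
    if (exprs.forall(_.resolved)) {
      Some(exprs.exists(_.exists(_.isInstanceOf[Attribute])))
    } else {
      None
    }
}
```

For example, `GroupByAllCheck.attributeCheck(Seq(Alias(UnresolvedAttribute("x"), "x")))` is deferred (`None`), while the same call on fully resolved expressions yields a definite answer.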
########## sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveGroupByAll.scala: ########## @@ -93,8 +93,9 @@ object ResolveGroupByAll extends Rule[LogicalPlan] { * end of analysis, so we can tell users that we fail to infer the grouping columns. */ def checkAnalysis(operator: LogicalPlan): Unit = operator match { - case a: Aggregate if matchToken(a) => - if (a.aggregateExpressions.exists(_.exists(_.isInstanceOf[Attribute]))) { + case a: Aggregate if a.aggregateExpressions.forall(_.resolved) && matchToken(a) => Review Comment: You are right about the check here. For the grouping expression inference, I think we can refactor it as https://github.com/apache/spark/pull/39523 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org --------------------------------------------------------------------- To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org For additional commands, e-mail: reviews-h...@spark.apache.org