amaliujia commented on a change in pull request #35404:
URL: https://github.com/apache/spark/pull/35404#discussion_r803988721



##########
File path: 
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
##########
@@ -4249,7 +4250,30 @@ object ApplyCharTypePadding extends Rule[LogicalPlan] {
  * rule right after the main resolution batch.
  */
 object RemoveTempResolvedColumn extends Rule[LogicalPlan] {
-  override def apply(plan: LogicalPlan): LogicalPlan = plan.resolveExpressions {
-    case t: TempResolvedColumn => UnresolvedAttribute(t.nameParts)
+  override def apply(plan: LogicalPlan): LogicalPlan = {
+    plan.foreachUp {
+      // A HAVING clause can be resolved as a Filter. Given HAVING func(column with wrong data type),
+      // the column may be wrapped in a TempResolvedColumn, e.g. mean(tempresolvedcolumn(t.c)).
+      // Because TempResolvedColumn still preserves the column's data type, this is a chance to
+      // check whether that type matches the function's required input type, and to throw an error
+      // on a mismatch.
+      case operator: Filter =>
+        operator.expressions.foreach(_.foreachUp {
+          case e: Expression if e.checkInputDataTypes().isFailure =>

Review comment:
       @cloud-fan 
   
   Thank you! `e.childrenResolved` is a handy call and it indeed solves the problem!
   
   I am still checking for the error in `RemoveTempResolvedColumn`. If you would prefer to check it in `CheckAnalysis`, let me know and I can make that change.
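   As an aside, the bottom-up traversal pattern used in the diff above can be sketched outside Spark. The following is a minimal, hypothetical model (`Node`, `Leaf`, `Func`, and `findTypeMismatches` are invented names, not Spark APIs): children are visited before their parent via `foreachUp`, so an input-type check on a function node runs only after its inputs have been seen, mimicking how `checkInputDataTypes` is applied to expressions under a `Filter`.

```scala
// Hypothetical sketch of a bottom-up tree traversal (not Spark code).
sealed trait Node {
  def children: Seq[Node]
  // Visit children first, then this node (analogous to TreeNode.foreachUp).
  def foreachUp(f: Node => Unit): Unit = {
    children.foreach(_.foreachUp(f))
    f(this)
  }
}

// A leaf column reference carrying a data type, standing in for a
// TempResolvedColumn that still knows the column's type.
final case class Leaf(name: String, dataType: String) extends Node {
  def children: Seq[Node] = Nil
}

// A function call with a required input type, standing in for an
// Expression whose checkInputDataTypes() can fail.
final case class Func(name: String, requiredType: String, children: Seq[Node]) extends Node {
  def inputTypesOk: Boolean = children.forall {
    case Leaf(_, t) => t == requiredType
    case _          => true
  }
}

object Demo {
  // Collect every function node whose inputs have the wrong data type.
  def findTypeMismatches(root: Node): List[String] = {
    var failures = List.empty[String]
    root.foreachUp {
      case f: Func if !f.inputTypesOk => failures ::= s"${f.name}: input data type mismatch"
      case _                          => ()
    }
    failures
  }

  def main(args: Array[String]): Unit = {
    // mean over a string column, mimicking mean(tempresolvedcolumn(t.c))
    // where t.c has an incompatible (string) type.
    val expr = Func("mean", requiredType = "numeric", Seq(Leaf("t.c", "string")))
    findTypeMismatches(expr).foreach(println)
  }
}
```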




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


