sunchao commented on a change in pull request #29775:
URL: https://github.com/apache/spark/pull/29775#discussion_r489988859



##########
File path: 
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/UnwrapCastInBinaryComparison.scala
##########
@@ -103,9 +103,9 @@ object UnwrapCastInBinaryComparison extends 
Rule[LogicalPlan] {
     // In case both sides have integral type, optimize the comparison by 
removing casts or
     // moving cast to the literal side.
     case be @ BinaryComparison(
-      Cast(fromExp, toType: IntegralType, _), Literal(value, literalType))
+      Cast(fromExp, toType: IntegralType, tz), Literal(value, literalType))
         if canImplicitlyCast(fromExp, toType, literalType) =>
-      simplifyIntegralComparison(be, fromExp, toType, value)
+      simplifyIntegralComparison(be, fromExp, toType, value, tz)

Review comment:
       (oops just found out my comment was not sent out successfully)
   
This is because `ResolveTimeZone` will try to add timezone info to all 
expressions that don't have it during query analysis. However, since the `Cast` 
expr is generated at the optimization phase, it will not have the timezone info. 
As a result, `PlanTest.comparePlans` will fail because of the mismatch. I can 
try to come up with a test if necessary.
   
   On the other hand, I think instead of using `Cast`, we may just use 
the value directly, since the `Cast` will be optimized away by 
`ConstantFolding` later anyway. What do you think?




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


