kazuyukitanimura commented on code in PR #960:
URL: https://github.com/apache/datafusion-comet/pull/960#discussion_r1773863372


##########
spark/src/main/scala/org/apache/comet/CometSparkSessionExtensions.scala:
##########
@@ -873,12 +873,18 @@ class CometSparkSessionExtensions
           LessThan(normalizeNaNAndZero(left), normalizeNaNAndZero(right))
         case LessThanOrEqual(left, right) =>
          LessThanOrEqual(normalizeNaNAndZero(left), normalizeNaNAndZero(right))
+        case Divide(left, right, evalMode) =>
+          Divide(left, normalizeNaNAndZero(right), evalMode)
+        case Remainder(left, right, evalMode) =>
+          Remainder(left, normalizeNaNAndZero(right), evalMode)
       }
     }
 
     def normalizeNaNAndZero(expr: Expression): Expression = {
       expr match {
         case _: KnownFloatingPointNormalized => expr
+        case FloatLiteral(f) if !f.equals(-0.0f) => expr

Review Comment:
   > Just clarifying: this is saying that as long as the literal isn't -0.0, we don't have to normalize it?
   
   Exactly
   
   > And it doesn't look like there's any way to define a literal NaN, so we don't handle that case?
   
   We can create a literal NaN, but Spark's `NormalizeNaNAndZero` does not do much for a literal NaN. I think NaN normalization only matters when some expression results are NaN:
   https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/NormalizeFloatingNumbers.scala#L194
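   
   As a side note, here is a minimal standalone sketch (plain Scala, not Comet code; the object name is made up) of the `equals`-vs-`==` behaviour that the `FloatLiteral` guard relies on: `java.lang.Float.equals` compares bit patterns, so it can tell `-0.0f` apart from `0.0f` and treats `NaN` as equal to itself.
   
   ```scala
   // Standalone sketch (not Comet code): how java.lang.Float.equals differs from ==
   // for the two values the FloatLiteral guard cares about, -0.0f and NaN.
   object FloatLiteralGuardDemo extends App {
     val negZero = -0.0f
     val posZero = 0.0f
   
     // Primitive == follows IEEE 754: -0.0f and 0.0f compare equal.
     println(negZero == posZero)            // true
     // Boxed equals compares bit patterns, so it can tell them apart.
     println(negZero.equals(posZero))       // false
     println(negZero.equals(-0.0f))         // true -> only an actual -0.0f literal fails the guard and gets normalized
   
     // NaN is the mirror image: never == itself, but equal to itself via equals.
     println(Float.NaN == Float.NaN)        // false
     println(Float.NaN.equals(Float.NaN))   // true -> a NaN literal passes the guard untouched
   }
   ```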


