cloud-fan commented on a change in pull request #34729:
URL: https://github.com/apache/spark/pull/34729#discussion_r767446855



##########
File path: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/mathExpressions.scala
##########
@@ -249,9 +249,9 @@ case class Cbrt(child: Expression) extends UnaryMathExpression(math.cbrt, "CBRT"
   """,
   since = "1.4.0",
   group = "math_funcs")
-case class Ceil(child: Expression) extends UnaryMathExpression(math.ceil, "CEIL") {
+ case class Ceil(child: Expression) extends UnaryMathExpression(math.ceil, "CEIL") {
   override def dataType: DataType = child.dataType match {

Review comment:
       Let's define the return type we want:
   1. if the input is a decimal type, we should follow `RoundBase` to define the return type
   2. if the input is an integral type, returning `LongType` as before should be fine.
   3. if the input is float/double, returning `LongType` is definitely wrong. I think returning `DoubleType` should be good.
   
   The problem is what to do if the `scale` parameter is not given. Shall we keep backward compatibility and use the same return type as before? Or do we prefer consistency within the system and change the return type?
   
   It looks weird if `ceil(c_double)` and `ceil(c_double, 0)` have different data types. Another idea is to only accept an integer constant as the scale parameter; then we can make `ceil` return long type for float/double input when `scale <= 0`.
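   
   To make the proposal concrete, here is a rough sketch of the return-type rules in points 1-3 for the case where no scale argument is given. This is not the code in this PR; the helper name `ceilResultType` is made up for illustration.
   
   ```scala
   import org.apache.spark.sql.types._
   
   // Hypothetical helper illustrating the return-type rules above for
   // ceil/floor when no scale argument is given:
   //   - decimal input  -> RoundBase-style decimal result with scale 0
   //   - integral input -> LongType, keeping backward compatibility
   //   - float/double   -> DoubleType instead of the historical LongType
   def ceilResultType(input: DataType): DataType = input match {
     case d: DecimalType =>
       // ceil can add one digit before the decimal point; the fraction is dropped
       DecimalType(math.min(d.precision - d.scale + 1, DecimalType.MAX_PRECISION), 0)
     case ByteType | ShortType | IntegerType | LongType => LongType
     case FloatType | DoubleType => DoubleType
     case other => other // leave other inputs to the analyzer's type checks
   }
   ```
   
   With a constant-only scale parameter, the float/double case could instead keep returning long type when `scale <= 0`, which would preserve the current behavior of the no-argument form.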
   
   cc @maropu @viirya @srielau 




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


