Github user setjet commented on a diff in the pull request:

    https://github.com/apache/spark/pull/18113#discussion_r153021545
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/aggregate/typedaggregators.scala ---
    @@ -99,3 +94,91 @@ class TypedAverage[IN](val f: IN => Double) extends Aggregator[IN, (Double, Long
         toColumn.asInstanceOf[TypedColumn[IN, java.lang.Double]]
       }
     }
    +
    +class TypedMinDouble[IN](val f: IN => Double) extends Aggregator[IN, Double, Double] {
    +  override def zero: Double = Double.PositiveInfinity
    +  override def reduce(b: Double, a: IN): Double = math.min(b, f(a))
    +  override def merge(b1: Double, b2: Double): Double = math.min(b1, b2)
    +  override def finish(reduction: Double): Double = {
    +    if (Double.PositiveInfinity == reduction) {
    --- End diff --
    
    That's correct, and it was part of the discussion above. We used to initialize it with null so that we could distinguish between those two cases, but as you can read above, that initial proposal was dropped because it didn't meet the ANSI standard.
    Another option I just realised would be to initialize it with Double.NaN and use that as a flag to distinguish an actual Double.PositiveInfinity result from the initial value. Then again, that would not work for Longs, since we cannot assign a NaN to a Long.


---
