Github user srowen commented on a diff in the pull request:

    https://github.com/apache/spark/pull/11982#discussion_r57529477
  
    --- Diff: core/src/main/scala/org/apache/spark/partial/SumEvaluator.scala ---
    @@ -56,9 +56,12 @@ private[spark] class SumEvaluator(totalOutputs: Int, confidence: Double)
           val confFactor = {
             if (counter.count > 100) {
               new NormalDistribution().inverseCumulativeProbability(1 - (1 - confidence) / 2)
    -        } else {
    +        } else if (counter.count > 1) {
               val degreesOfFreedom = (counter.count - 1).toInt
               new TDistribution(degreesOfFreedom).inverseCumulativeProbability(1 - (1 - confidence) / 2)
    +        } else {
    +          // No way to meaningfully estimate confidence, so we signal no particular confidence interval
    +          Double.PositiveInfinity
    --- End diff --
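
For the first two branches, the t critical value converges to the normal one as count grows, so cutting over to the normal distribution at count > 100 is a reasonable approximation. A quick REPL check (commons-math3 is already on Spark's classpath; the values below are illustrative):

```
import org.apache.commons.math3.distribution.{NormalDistribution, TDistribution}

val confidence = 0.95
val p = 1 - (1 - confidence) / 2  // 0.975 for a two-sided interval

// Standard normal critical value: ~1.96
new NormalDistribution().inverseCumulativeProbability(p)

// t critical values shrink toward the normal one as df grows:
// df = 1 -> ~12.71, df = 10 -> ~2.23, df = 100 -> ~1.98
Seq(1, 10, 100).map(df => new TDistribution(df).inverseCumulativeProbability(p))
```

So those two branches are fine.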
    
    But the relevant computation is:
    ```
    scala> Double.NaN * Double.PositiveInfinity
    res1: Double = NaN
    ```
    
    The interval should not be `NaN`.
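
Concretely, if the bounds are formed as estimate ± confFactor * stdev (the shape of the code around this diff), an infinite confFactor times a NaN standard deviation poisons both ends of the interval. A sketch with hypothetical values:

```
// Hypothetical count == 1 case: the point estimate is some finite value,
// but the stdev is NaN because the sample variance is NaN (see below).
val sumEstimate = 42.0
val confFactor = Double.PositiveInfinity
val sumStdev = Double.NaN

val low = sumEstimate - confFactor * sumStdev   // NaN, not Double.NegativeInfinity
val high = sumEstimate + confFactor * sumStdev  // NaN, not Double.PositiveInfinity
```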
    
If count is 1, then the sample variance is NaN, and so is everything else that's a function of a second moment.
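
To spell that out: with a single observation, the Bessel-corrected sample variance divides a zero sum of squared deviations by n - 1 = 0, and 0.0 / 0.0 is NaN under IEEE 754:

```
// One observation: zero squared deviation from the mean, zero denominator.
val n = 1
val sumSqDev = 0.0
val sampleVariance = sumSqDev / (n - 1)  // 0.0 / 0 == Double.NaN

math.sqrt(sampleVariance)  // NaN: the stdev, and every bound built from it
```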

