srowen commented on a change in pull request #26596: [SPARK-29959][ML][PYSPARK] Summarizer support more metrics
URL: https://github.com/apache/spark/pull/26596#discussion_r350234273
 
 

 ##########
 File path: mllib/src/main/scala/org/apache/spark/ml/stat/Summarizer.scala
 ##########
 @@ -460,21 +486,52 @@ private[ml] object SummaryBuilderImpl extends Logging {
       val realMean = Array.ofDim[Double](n)
       var i = 0
       while (i < n) {
-        realMean(i) = currMean(i) * (weightSum(i) / totalWeightSum)
+        realMean(i) = currMean(i) * (currWeightSum(i) / totalWeightSum)
         i += 1
       }
       Vectors.dense(realMean)
     }
 
+    /**
+     * Sum of each dimension.
+     */
+    def sum: Vector = {
+      require(requestedMetrics.contains(Sum))
 +      require(totalWeightSum > 0, s"Nothing has been added to this summarizer.")
+
+      val realSum = Array.ofDim[Double](n)
+      var i = 0
+      while (i < n) {
+        realSum(i) = currMean(i) * currWeightSum(i)
+        i += 1
+      }
+      Vectors.dense(realSum)
+    }
+
     /**
      * Unbiased estimate of sample variance of each dimension.
      */
     def variance: Vector = {
       require(requestedMetrics.contains(Variance))
       require(totalWeightSum > 0, s"Nothing has been added to this summarizer.")
 
-      val realVariance = Array.ofDim[Double](n)
+      val realVariance = computeVariance
+      Vectors.dense(realVariance)
+    }
+
+    /**
+     * Unbiased estimate of standard deviation of each dimension.
+     */
+    def std: Vector = {
+      require(requestedMetrics.contains(Std))
 +      require(totalWeightSum > 0, s"Nothing has been added to this summarizer.")
 
+      val realVariance = computeVariance
+      Vectors.dense(realVariance.map(math.sqrt))
+    }
+
+    private def computeVariance: Array[Double] = {
+      val realVariance = Array.ofDim[Double](n)
       val denominator = totalWeightSum - (weightSquareSum / totalWeightSum)
 
 Review comment:
   Whatever this is, I'm not sure why it would be negative, unless the idea is simply to account for floating-point imprecision. But if it is meant to be the equivalent of dividing by "n-1", then this seems to be computing 'sample variance' incorrectly. I'm not quite sure how sample variance is supposed to work out in the weighted case, but if that's what it's doing, I don't think it's correct anyway.
   
   It would then be valid to require totalWeightSum > 0, but it would be equally valid, and consistent with pandas, to return NaN I think. But we can keep the current behavior.
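   
   To make the concern concrete, here is a minimal sketch (my own illustration, not code from this PR) of how the denominator `totalWeightSum - (weightSquareSum / totalWeightSum)` behaves. With unit weights it reduces to n - 1, i.e. Bessel's correction; with non-unit weights it matches the usual unbiased estimator for "reliability" weights, which is not the same thing as dividing by n - 1.
   
   ```scala
   // Illustration only, not part of the PR: behavior of the variance denominator.
   object VarianceDenominatorSketch {
     // Mirrors the expression in computeVariance:
     //   totalWeightSum - (weightSquareSum / totalWeightSum)
     def denominator(weights: Array[Double]): Double = {
       val totalWeightSum = weights.sum
       val weightSquareSum = weights.map(w => w * w).sum
       totalWeightSum - (weightSquareSum / totalWeightSum)
     }
   
     def main(args: Array[String]): Unit = {
       // Unit weights: 4 - 4/4 = 3.0, i.e. n - 1 (Bessel's correction).
       println(denominator(Array(1.0, 1.0, 1.0, 1.0)))
       // Non-unit weights: 3 - 4.5/3 = 1.5, the reliability-weights correction.
       println(denominator(Array(0.5, 0.5, 2.0)))
     }
   }
   ```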
