srowen commented on a change in pull request #26596: [SPARK-29959][ML][PYSPARK] Summarizer support more metrics
URL: https://github.com/apache/spark/pull/26596#discussion_r350852859
##########
File path: mllib/src/main/scala/org/apache/spark/ml/stat/Summarizer.scala
##########
@@ -460,21 +486,52 @@ private[ml] object SummaryBuilderImpl extends Logging {
val realMean = Array.ofDim[Double](n)
var i = 0
while (i < n) {
- realMean(i) = currMean(i) * (weightSum(i) / totalWeightSum)
+ realMean(i) = currMean(i) * (currWeightSum(i) / totalWeightSum)
i += 1
}
Vectors.dense(realMean)
}
+ /**
+ * Sum of each dimension.
+ */
+ def sum: Vector = {
+ require(requestedMetrics.contains(Sum))
+ require(totalWeightSum > 0, s"Nothing has been added to this
summarizer.")
+
+ val realSum = Array.ofDim[Double](n)
+ var i = 0
+ while (i < n) {
+ realSum(i) = currMean(i) * currWeightSum(i)
+ i += 1
+ }
+ Vectors.dense(realSum)
+ }
+
/**
* Unbiased estimate of sample variance of each dimension.
Review comment:
Wait, you are right: pandas computes the sample variance/stddev. Databases,
including Spark, have both `var_samp` and `var_pop` functions, but `variance`
is an alias for `var_samp`. I kind of disagree with that conceptually, but it
is the convention we should follow.
The current implementation does appear to return the sample variance. For
inputs 1, 2, 3 the sum of squared deviations is (1-2)^2 + (2-2)^2 + (3-2)^2 = 2,
and it returns 2 / (3 - 1) = 1 rather than the population value 2 / 3, so it is
doing what we intend.
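For reference, a minimal standalone sketch reproducing the 1, 2, 3 check (the
`VarianceCheck` object name and the local SparkSession setup are illustrative,
not part of this PR):
```scala
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.ml.stat.Summarizer
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{var_pop, var_samp, variance}

object VarianceCheck {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("VarianceCheck")
      .getOrCreate()
    import spark.implicits._

    // SQL side: variance is an alias for var_samp, so the first two columns
    // agree. Sum of squared deviations is 2, so var_samp = 2 / (3 - 1) = 1.0
    // and var_pop = 2 / 3 ≈ 0.667.
    Seq(1.0, 2.0, 3.0).toDF("x")
      .select(variance($"x"), var_samp($"x"), var_pop($"x"))
      .show()

    // ML side: Summarizer.variance should match var_samp, i.e. return [1.0].
    Seq(Vectors.dense(1.0), Vectors.dense(2.0), Vectors.dense(3.0))
      .toDF("features")
      .select(Summarizer.variance($"features"))
      .show()

    spark.stop()
  }
}
```
If the ML column disagreed with `var_samp`, that would indicate the summarizer
is computing the population variance instead.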