Github user GeorgeDittmar commented on a diff in the pull request:
https://github.com/apache/spark/pull/6112#discussion_r31382691
--- Diff: mllib/src/main/scala/org/apache/spark/mllib/linalg/Vectors.scala
---
@@ -717,6 +719,49 @@ class SparseVector(
new SparseVector(size, ii, vv)
}
}
+
+ override def argmax: Int = {
+ if (size == 0) {
+ -1
+ } else {
+
+ var maxIdx = 0
+ var maxValue = if (indices(0) != 0) 0.0 else values(0)
+
+ foreachActive { (i, v) =>
+ if (v > maxValue) {
+ maxIdx = i
+ maxValue = v
+ }
+ }
+
+ // look for inactive values in case all active node values are negative
+ if (size != values.size && maxValue < 0) {
+ maxIdx = calcInactiveIdx(indices(0))
+ maxValue = 0
+ }
+ maxIdx
+ }
+ }
+
+ /**
+ * Finds the first instance of an inactive node in a sparse vector and returns
+ * the index of that element.
+ * @param idx starting index of the search
+ * @return index of the first inactive node, or -1 if it cannot find one
+ */
+ private[SparseVector] def calcInactiveIdx(idx: Int): Int = {
--- End diff --
Yeah, I was going back and forth on whether I wanted to pass in an idx param or
not. It would be nice in case we want to, say, find an inactive value after a
given index, but that's probably coding for the future, which tends to be messy.
I'll remove it for now, and if anyone else has other opinions we can go from
there.
I dunno, I think I'm just partial to recursive functions, but I can still give
yours a try. Really up for whatever best fits the Spark code style.
---