[GitHub] spark pull request: [MLLIB] SPARK-4362: Added classProbabilities m...
Github user alanctgardner commented on a diff in the pull request: https://github.com/apache/spark/pull/3626#discussion_r25172516

--- Diff: mllib/src/main/scala/org/apache/spark/mllib/classification/NaiveBayes.scala ---
@@ -65,6 +66,25 @@ class NaiveBayesModel private[mllib] (
   override def predict(testData: Vector): Double = {
    labels(brzArgmax(brzPi + brzTheta * testData.toBreeze))
  }
+
+  def classProbabilities(testData: RDD[Vector]):
--- End diff --

Sorry for the delay. I have no strong preference, but predictProbabilities makes sense for consistency. I can make that change and the style ones mentioned. My stats background is not super strong; @jatinpreet seemed to imply there's a correctness issue with this PR. Can anyone comment on whether I've got the math wrong?

---
If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA.
---
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
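The math question raised here, whether the log-space scores `brzPi + brzTheta * x` can be turned into valid posterior probabilities, comes down to normalizing with the log-sum-exp trick. Below is a minimal plain-Scala sketch of that normalization (no Spark or Breeze; `posteriors` and its arguments are illustrative names, not the PR's API):

```scala
// Hypothetical sketch: turning Naive Bayes log-space class scores into
// normalized posterior probabilities. For each class k the model produces
// logScores(k) = log(pi_k) + theta_k . x; exponentiating and normalizing
// yields the posteriors. Subtracting the max score first avoids overflow.
def posteriors(labels: Array[Double], logScores: Array[Double]): Map[Double, Double] = {
  val maxScore = logScores.max                       // for numerical stability
  val exps = logScores.map(s => math.exp(s - maxScore))
  val total = exps.sum
  labels.zip(exps.map(_ / total)).toMap              // probabilities sum to 1
}
```

The subtraction of `maxScore` cancels out in the ratio, so the result is unchanged mathematically but safe from `exp` overflow when the log scores are large in magnitude.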
GitHub user alanctgardner opened a pull request: https://github.com/apache/spark/pull/3626

[MLLIB] SPARK-4362: Added classProbabilities method for Naive Bayes

Added methods which accept an RDD or array and return a map of (label -> posterior prob.) for each input set rather than only returning the key with the maximum value.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/alanctgardner/spark nb-posterior

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/3626.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #3626

commit da4fddaeaa6a4c72e6024db1df1ff7d1a356ff90
Author: Alan Gardner alanctgard...@gmail.com
Date: 2014-12-05T21:10:31Z

    Added classProbabilities method
Github user alanctgardner commented on a diff in the pull request: https://github.com/apache/spark/pull/3626#discussion_r21402907

--- Diff: mllib/src/main/scala/org/apache/spark/mllib/classification/NaiveBayes.scala ---
@@ -65,6 +65,24 @@ class NaiveBayesModel private[mllib] (
   override def predict(testData: Vector): Double = {
    labels(brzArgmax(brzPi + brzTheta * testData.toBreeze))
  }
+
+  def classProbabilities(testData: RDD[Vector]):
+      RDD[scala.collection.mutable.Map[Double, Double]] = {
+    val bcModel = testData.context.broadcast(this)
+    testData.mapPartitions { iter =>
+      val model = bcModel.value
+      iter.map(model.classProbabilities)
+    }
+  }
+
+  def classProbabilities(testData: Vector): scala.collection.mutable.Map[Double, Double] = {
--- End diff --

Scala newbie here. I couldn't find a better pattern to build the map than mutating it in the foreach. Should I just build a mutable map and then convert it to an immutable one before returning?
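On the mutable-map question above: a common idiomatic alternative is to zip the labels with the computed probabilities and call `toMap`, which produces an immutable `Map` with no intermediate mutation. A small sketch with illustrative data (not the PR's actual code):

```scala
// Two ways to build a label -> probability map from parallel arrays.
// The values here are placeholders for a model's computed posteriors.
val labels = Array(0.0, 1.0, 2.0)
val probs  = Array(0.2, 0.5, 0.3)

// Mutable style, as in the PR: fill a mutable.Map inside a foreach.
val mutableMap = scala.collection.mutable.Map[Double, Double]()
labels.indices.foreach { i => mutableMap(labels(i)) = probs(i) }

// Idiomatic immutable alternative: zip the parallel arrays and convert.
val immutableMap: Map[Double, Double] = labels.zip(probs).toMap
```

Both produce the same contents; the zip/`toMap` form lets the method's return type be the plain immutable `Map[Double, Double]`, avoiding the `scala.collection.mutable.Map` in the public signature.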