Github user acidghost commented on a diff in the pull request:

    https://github.com/apache/spark/pull/6761#discussion_r32519282
  
    --- Diff: mllib/src/main/scala/org/apache/spark/mllib/classification/NaiveBayes.scala ---
    @@ -113,6 +106,55 @@ class NaiveBayesModel private[mllib] (
         }
       }
     
    +  def predictProbabilities(testData: RDD[Vector]): RDD[Map[Double, Double]] = {
    +    val bcModel = testData.context.broadcast(this)
    +    testData.mapPartitions { iter =>
    +      val model = bcModel.value
    +      iter.map(model.predictProbabilities)
    +    }
    +  }
    +
    +  def predictProbabilities(testData: Vector): Map[Double, Double] = {
    +    modelType match {
    +      case Multinomial =>
    +        val prob = multinomialCalculation(testData)
    +        posteriorProbabilities(prob)
    +      case Bernoulli =>
    +        val prob = bernoulliCalculation(testData)
    +        posteriorProbabilities(prob)
    +      case _ =>
    +        // This should never happen.
    +        throw new UnknownError(s"Invalid modelType: $modelType.")
    +    }
    +  }
    +
    +  protected[classification] def multinomialCalculation(testData: Vector): DenseVector = {
    +    val prob = thetaMatrix.multiply(testData)
    +    BLAS.axpy(1.0, piVector, prob)
    +    prob
    +  }
    +
    +  protected[classification] def bernoulliCalculation(testData: Vector): DenseVector = {
    +    testData.foreachActive { (index, value) =>
    +      if (value != 0.0 && value != 1.0) {
    +        throw new SparkException(
    +          s"Bernoulli naive Bayes requires 0 or 1 feature values but found $testData.")
    +      }
    +    }
    +    val prob = thetaMinusNegTheta.get.multiply(testData)
    +    BLAS.axpy(1.0, piVector, prob)
    +    BLAS.axpy(1.0, negThetaSum.get, prob)
    +    prob
    +  }
    +
    +  protected[classification] def posteriorProbabilities(prob: DenseVector): Map[Double, Double] = {
    +    val maxLogs = max(prob.toBreeze)
    +    val minLogs = min(prob.toBreeze)
    +    val normalized = prob.toArray.map(e => (e - minLogs) / (maxLogs - minLogs))
    --- End diff --
    
    I think that your formulation is wrong. The problem is that rescaling the log-probs with an addition / subtraction makes them diverge far more than they originally did, so the relative scale among them is lost. That's why trying your formula gives me probabilities like 0.01, 0.01 and 0.98, which are meaningless.
    
    For example I have the following data:
    ```
    probArray: [
                -1320.8943009394911,
                -1169.3873709544946,
                -1393.4132748832342
            ]
    
    probabilities: [
                1.589923898662823e-66,
                1,
                5.090800995460939e-98
            ]
    probabilities.map(_ / probSum) = 1.589923898662823e-66, 1, 5.090800995460939e-98
    ```
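
    For reference, here is a minimal standalone sketch (not the PR code) of why the numbers behave this way: exponentiating log-posteriors around -1300 directly underflows to 0.0 in double precision, while shifting by the max log-prob first yields tiny-but-finite values like the ones above.

    ```scala
    object UnderflowDemo {
      def main(args: Array[String]): Unit = {
        // Log-posteriors of the magnitude shown above.
        val probArray = Array(-1320.8943009394911, -1169.3873709544946, -1393.4132748832342)

        // math.exp underflows to exactly 0.0 for arguments below roughly -745,
        // so exponentiating these directly loses everything.
        println(probArray.map(math.exp).mkString(", "))

        // Shifting by the max log-prob keeps the values representable:
        // the dominant class maps to exp(0) = 1, the others to tiny
        // but finite numbers.
        val maxLog = probArray.max
        println(probArray.map(lp => math.exp(lp - maxLog)).mkString(", "))
      }
    }
    ```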
    
    Dividing / multiplying the log probabilities by a constant is the only way to maintain the relative scale of the probabilities, because it raises each of them to the same power.
    
    My final code is the following:
    ```scala
    private def posteriorProbabilities(prob: DenseVector): Map[Double, Double] = {
        val probArray = prob.toArray
        val maxLog = probArray.max
        val probabilities = probArray.map(lp => math.exp(lp / math.abs(maxLog)))
        val probSum = probabilities.sum
        labels.zip(probabilities.map(_ / probSum)).toMap
      }
    ```
    
    Testing gives me the following:
    ```
    probArray: [
                -1320.8943009394911,
                -1169.3873709544946,
                -1393.4132748832342
            ]
    
    probabilities: [
                0.32486541947403913,
                0.3698035598001161,
                0.30533102072584467
            ]
    probabilities.map(_ / probSum) = 0.32317511772096846, 0.36787944117144233, 0.30374235807152056
    ```
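
    For completeness, a self-contained sketch of the function above, using a plain `Array[Double]` in place of mllib's `DenseVector` and passing the model's `labels` in explicitly (the label values below are hypothetical stand-ins):

    ```scala
    object PosteriorDemo {
      // Standalone version of posteriorProbabilities: dividing each log-prob
      // by |maxLog| raises every probability to the same power 1 / |maxLog|,
      // then the results are renormalised to sum to 1.
      def posteriorProbabilities(labels: Array[Double], logProbs: Array[Double]): Map[Double, Double] = {
        val maxLog = logProbs.max
        val probabilities = logProbs.map(lp => math.exp(lp / math.abs(maxLog)))
        val probSum = probabilities.sum
        labels.zip(probabilities.map(_ / probSum)).toMap
      }

      def main(args: Array[String]): Unit = {
        val probArray = Array(-1320.8943009394911, -1169.3873709544946, -1393.4132748832342)
        // Hypothetical class labels 0.0 / 1.0 / 2.0.
        println(posteriorProbabilities(Array(0.0, 1.0, 2.0), probArray))
      }
    }
    ```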


