Github user srowen commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20472#discussion_r166773606
  
    --- Diff: mllib/src/main/scala/org/apache/spark/ml/tree/impl/RandomForest.scala ---
    @@ -1001,11 +996,18 @@ private[spark] object RandomForest extends Logging {
         } else {
           val numSplits = metadata.numSplits(featureIndex)
     
    -      // get count for each distinct value
    -      val (valueCountMap, numSamples) = featureSamples.foldLeft((Map.empty[Double, Int], 0)) {
    +      // get count for each distinct value except zero value
    +      val (partValueCountMap, partNumSamples) = featureSamples.foldLeft((Map.empty[Double, Int], 0)) {
    --- End diff ---
    
    This bit of (existing) code seems obtuse to me. What about...
    
    ```
    // count occurrences of each distinct value in a single pass
    val partNumSamples = featureSamples.size
    val partValueCountMap = scala.collection.mutable.Map[Double, Int]()
    featureSamples.foreach { x =>
      partValueCountMap(x) = partValueCountMap.getOrElse(x, 0) + 1
    }
    ```
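    
    For reference, a self-contained sketch of that suggestion, runnable on its own. The wrapping object and the sample values are made up for illustration; in the real method, `featureSamples` is whatever collection of feature values the caller passes in.
    
    ```
    object ValueCountSketch {
      def main(args: Array[String]): Unit = {
        // made-up sample of continuous feature values, standing in for featureSamples
        val featureSamples: Seq[Double] = Seq(0.0, 1.5, 1.5, 2.0, 0.0, 3.5)
    
        // number of samples seen in this partition
        val partNumSamples = featureSamples.size
    
        // count occurrences of each distinct value with a mutable map
        val partValueCountMap = scala.collection.mutable.Map[Double, Int]()
        featureSamples.foreach { x =>
          partValueCountMap(x) = partValueCountMap.getOrElse(x, 0) + 1
        }
    
        // e.g. numSamples = 6; counts include 0.0 -> 2, 1.5 -> 2, 2.0 -> 1, 3.5 -> 1
        // (map iteration order may vary)
        println(s"numSamples = $partNumSamples, counts = $partValueCountMap")
      }
    }
    ```
    
    (The foldLeft in the existing code allocates a new immutable map for every element, which is presumably why the mutable-map loop reads as simpler here.)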
      


