Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20472#discussion_r165671106
--- Diff: mllib/src/main/scala/org/apache/spark/ml/tree/impl/RandomForest.scala ---
@@ -1001,11 +1002,22 @@ private[spark] object RandomForest extends Logging {
     } else {
       val numSplits = metadata.numSplits(featureIndex)
-      // get count for each distinct value
-      val (valueCountMap, numSamples) = featureSamples.foldLeft((Map.empty[Double, Int], 0)) {
+      // get count for each distinct value except zero value
+      val (partValueCountMap, partNumSamples) = featureSamples.foldLeft((Map.empty[Double, Int], 0)) {
         case ((m, cnt), x) =>
           (m + ((x, m.getOrElse(x, 0) + 1)), cnt + 1)
       }
+
+      // Calculate the number of samples for finding splits
+      var requiredSamples: Long = math.max(metadata.maxBins * metadata.maxBins, 10000)
--- End diff ---
This logic is somewhat copied from another method. Can we create a new method? Or pass the result through the methods?
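For illustration only, here is a minimal sketch of how the duplicated computation could be factored into a shared helper. The method name `samplesFractionForFindSplits` and the assumption that `DecisionTreeMetadata` exposes `maxBins` and `numExamples` are mine, not part of the PR:

  /**
   * Sketch of a shared helper (e.g. inside `private[spark] object RandomForest`):
   * fraction of the dataset to sample when searching for splits, so that both
   * call sites reuse a single definition instead of repeating the math.
   */
  private[tree] def samplesFractionForFindSplits(
      metadata: DecisionTreeMetadata): Double = {
    // Calculate the number of samples for finding splits
    val requiredSamples: Long = math.max(metadata.maxBins * metadata.maxBins, 10000)
    if (requiredSamples < metadata.numExamples) {
      requiredSamples.toDouble / metadata.numExamples
    } else {
      1.0
    }
  }

Both call sites could then invoke this helper, or alternatively the computed fraction could be passed down as a parameter, per the second suggestion above.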
---