Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/20632#discussion_r169739059
--- Diff: mllib/src/main/scala/org/apache/spark/ml/tree/Node.scala ---
@@ -287,6 +291,34 @@ private[tree] class LearningNode(
}
}
+ /**
+ * @return true iff the node is a leaf.
+ */
+ private def isLeafNode(): Boolean = leftChild.isEmpty && rightChild.isEmpty
+
+ // the set of (leaf) predictions appearing in the subtree rooted at the given node.
+ private lazy val leafPredictions: Set[Double] = {
--- End diff ---
Oh right, you call `toNode` in here. My concern was that `toNode` gets called
on every node when converting learning nodes into final ones, so wouldn't this
recompute the subtrees over and over? I may have misread how this part works,
though.
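
As a minimal sketch of the memoization question (this is not the actual Spark
`LearningNode` class, just a hypothetical tree with the same shape): because
`leafPredictions` is a `lazy val`, each node computes its set at most once and
child nodes reuse their own cached sets, so a full traversal stays linear in
the number of nodes rather than recomputing each subtree per visit.

```scala
// Hypothetical stand-in for LearningNode, to illustrate lazy-val memoization.
class TreeNode(val prediction: Double,
               val leftChild: Option[TreeNode],
               val rightChild: Option[TreeNode]) {

  // A node is a leaf iff it has no children.
  def isLeafNode: Boolean = leftChild.isEmpty && rightChild.isEmpty

  // Evaluated at most once per node; subsequent reads return the cached set,
  // so repeated traversals do not redo the subtree work.
  lazy val leafPredictions: Set[Double] =
    if (isLeafNode) Set(prediction)
    else leftChild.map(_.leafPredictions).getOrElse(Set.empty) ++
         rightChild.map(_.leafPredictions).getOrElse(Set.empty)
}

object Demo extends App {
  val leaf1 = new TreeNode(1.0, None, None)
  val leaf2 = new TreeNode(2.0, None, None)
  val root  = new TreeNode(0.0, Some(leaf1), Some(leaf2))
  println(root.leafPredictions)  // internal node: union of its leaves' predictions
}
```

Whether this helps in the PR depends on whether the same `LearningNode`
instances are revisited; if a conversion rebuilds fresh nodes each time, the
`lazy val` caches would not be shared across those copies.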
---