Github user asolimando commented on a diff in the pull request:
https://github.com/apache/spark/pull/20632#discussion_r168925232
--- Diff: mllib/src/main/scala/org/apache/spark/ml/tree/Node.scala ---
@@ -287,6 +292,41 @@ private[tree] class LearningNode(
}
}
+ /**
+ * Method testing whether a node is a leaf.
+ * @return true iff a node is a leaf.
+ */
+ private def isLeafNode(): Boolean = leftChild.isEmpty && rightChild.isEmpty
+
+ /** True iff the node should be a leaf. */
+ private lazy val shouldBeLeaf: Boolean = leafPredictions.size == 1
+
+ /**
+ * Returns the set of (leaf) predictions appearing in the subtree rooted at the considered node.
+ * @return the set of (leaf) predictions appearing in the subtree rooted at the given node.
+ */
+ private def leafPredictions: Set[Double] = {
--- End diff ---
Short-circuiting it is indeed possible and might avoid a lot of extra work. I have modified
if (rightChild.isDefined) {
into
if (predBuffer.size <= 1 && rightChild.isDefined) {
I have also replaced shouldBeLeaf by turning leafPredictions into a lazy
val and testing it directly in the toNode method.
I think readability has also improved with this inlining.
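For context, here is a minimal self-contained sketch of the idea on a toy node type
(the Node case class, the collect helper and the shouldCollapseToLeaf name are
illustrative assumptions for this sketch, not the actual patch):

object LeafPredictionSketch {

  // Toy node: a leaf is a node with no children (mirrors LearningNode's
  // leftChild/rightChild Options); the prediction field is an assumption.
  case class Node(
      prediction: Double,
      leftChild: Option[Node] = None,
      rightChild: Option[Node] = None) {

    // Distinct predictions of the leaves under this node, computed lazily,
    // with the right descent cut short once two distinct values are seen,
    // since the subtree can then no longer be collapsed into one leaf.
    lazy val leafPredictions: Set[Double] = {
      val predBuffer = scala.collection.mutable.Set.empty[Double]
      def collect(node: Node): Unit = {
        if (node.leftChild.isEmpty && node.rightChild.isEmpty) {
          predBuffer += node.prediction
        } else {
          node.leftChild.foreach(collect)
          // mirrors `if (predBuffer.size <= 1 && rightChild.isDefined)`
          if (predBuffer.size <= 1 && node.rightChild.isDefined) {
            collect(node.rightChild.get)
          }
        }
      }
      collect(this)
      predBuffer.toSet
    }

    // Replaces a separate shouldBeLeaf flag: collapse iff all leaves agree.
    def shouldCollapseToLeaf: Boolean = leafPredictions.size == 1
  }

  def main(args: Array[String]): Unit = {
    val redundant = Node(0.0, Some(Node(1.0)), Some(Node(1.0)))
    val mixed = Node(0.0, Some(Node(1.0)), Some(Node(0.0)))
    println(redundant.shouldCollapseToLeaf) // true
    println(mixed.shouldCollapseToLeaf)     // false
  }
}

The guard pays off because once two distinct predictions have been collected the
subtree can never be turned into a single leaf, so descending further is wasted work.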
---
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]