[ https://issues.apache.org/jira/browse/SPARK-3162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Joseph K. Bradley updated SPARK-3162:
-------------------------------------
Target Version/s: 2.2.0 (was: 2.1.0)
> Train DecisionTree locally when possible
> ----------------------------------------
>
> Key: SPARK-3162
> URL: https://issues.apache.org/jira/browse/SPARK-3162
> Project: Spark
> Issue Type: Improvement
> Components: ML
> Reporter: Joseph K. Bradley
> Priority: Critical
>
> Improvement: communication
> Currently, every level of a DecisionTree is trained in a distributed manner.
> However, at deeper levels of the tree, only a small subset of the training
> data may be routed to any given node. If a node's training data fits in one
> machine's memory, it may be more efficient to shuffle that data to a single
> machine and train the rest of the subtree rooted at that node locally.
> Note: Local training may become feasible at different levels in different
> branches of the tree. There are two main options for handling this case:
> (1) Continue distributed training until every remaining node can be trained
> locally. This entails training multiple levels at once (locally).
> (2) Train branches locally as soon as they qualify, interleaving local
> training with distributed training of the remaining branches (see the
> sketch below).
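>
> A sketch of option (2), interleaving local and distributed work. The two
> helpers are hypothetical stubs standing in for the real training steps, and
> this reuses the NodeTrainingState / LocalTrainingSwitch sketch above:
> {code:scala}
> object InterleavedTraining {
>   // Hypothetical stub: would collect the node's rows and finish its subtree.
>   def trainSubtreeLocally(node: NodeTrainingState): Unit =
>     println(s"node ${node.nodeId}: finishing subtree on one machine")
>
>   // Hypothetical stub: would run one distributed split and return child nodes.
>   def splitOneLevelDistributed(node: NodeTrainingState): List[NodeTrainingState] =
>     List.empty
>
>   def trainLoop(frontier: List[NodeTrainingState]): Unit = {
>     var remaining = frontier
>     while (remaining.nonEmpty) {
>       // Peel off nodes whose data now fits on one machine...
>       val (localReady, stillDistributed) =
>         remaining.partition(LocalTrainingSwitch.canTrainLocally)
>       localReady.foreach(trainSubtreeLocally)
>       // ...and advance the rest by one distributed level.
>       remaining = stillDistributed.flatMap(splitOneLevelDistributed)
>     }
>   }
> }
> {code}
> Option (1) would instead defer all local work until stillDistributed is
> empty, then train the remaining subtrees (multiple levels at once) locally.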