Github user sachingoel0101 commented on the pull request: https://github.com/apache/flink/pull/710#issuecomment-114173765 The fundamental idea behind a scalable decision tree algorithm is to reduce the number of splits that must be checked at every node. Ideally, we'd check every value of every attribute, which leads to N*D checks at every node [N is the number of instances and D is the dimensionality]. To bring this down, we summarize the values of each attribute into a probability distribution using a histogram. After that, we can evaluate as many candidate splits as we want, independent of the actual number of training instances. Aside from this, there is nothing in the paper that departs from the standard decision tree algorithm. I'll update the branch soon to incorporate your comments.
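To illustrate the idea, here is a minimal Python sketch (hypothetical names, not the actual Flink implementation): one attribute's N values are compressed into a fixed number of histogram bins, and only the interior bin boundaries are evaluated as candidate splits, so the work per node depends on the bin count rather than on N.

```python
def histogram_splits(values, num_bins=10):
    """Summarize one attribute's values into a fixed-size histogram
    and return (bin counts, candidate split thresholds).

    Hypothetical sketch of histogram-based split reduction; at most
    num_bins - 1 candidate splits are produced, independent of N.
    """
    lo, hi = min(values), max(values)
    width = (hi - lo) / num_bins
    counts = [0] * num_bins
    for v in values:
        # Clamp the top edge into the last bin.
        idx = min(int((v - lo) / width), num_bins - 1)
        counts[idx] += 1
    # Interior bin boundaries serve as the candidate split points.
    splits = [lo + width * i for i in range(1, num_bins)]
    return counts, splits

values = [float(i) for i in range(1000)]   # N = 1000 instances
counts, splits = histogram_splits(values, num_bins=10)
print(len(splits))  # 9 candidate splits instead of ~1000
```

With D attributes, each node then checks D*(num_bins - 1) splits rather than N*D, which is what makes the approach scale with dataset size.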