[ https://issues.apache.org/jira/browse/SPARK-4240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14963648#comment-14963648 ]

Joseph K. Bradley commented on SPARK-4240:
------------------------------------------

This conversation slipped under my radar somehow; my apologies!

I think it'd be fine to copy the implementation of GBTs to spark.ml, especially 
if we want to restructure it to support TreeBoost.  As for updating or 
replacing the spark.mllib implementation: ideally it would eventually become a 
wrapper around the spark.ml implementation, but we should focus on the 
spark.ml API and implementation for now, even if that means temporarily 
maintaining a copy of the code.

I think it'd be hard to combine this work with generic boosting because 
TreeBoost relies on the fact that trees are a space-partitioning algorithm, but 
we could discuss feasibility if there is a way to leverage the same 
implementation.

[~dbtsai] expressed interest in this work, so I'll ping him here.

> Refine Tree Predictions in Gradient Boosting to Improve Prediction Accuracy.
> ----------------------------------------------------------------------------
>
>                 Key: SPARK-4240
>                 URL: https://issues.apache.org/jira/browse/SPARK-4240
>             Project: Spark
>          Issue Type: New Feature
>          Components: MLlib
>    Affects Versions: 1.3.0
>            Reporter: Sung Chung
>
> The gradient boosting as currently implemented estimates the loss-gradient in 
> each iteration using regression trees. At every iteration, the regression 
> trees are trained/split to minimize predicted gradient variance. 
> Additionally, the terminal node predictions are computed to minimize the 
> prediction variance.
> However, such predictions won't be optimal for loss functions other than 
> mean-squared error. The TreeBoost refinement can help mitigate this issue by 
> modifying the terminal node prediction values so that those predictions 
> directly minimize the actual loss function. Although this doesn't change the 
> fact that the tree splits were chosen through variance reduction, it should 
> still improve the gradient estimates, and thus the overall performance.
> The details of this can be found in the R vignette. This paper also shows how 
> to refine the terminal node predictions.
> http://www.saedsayad.com/docs/gbm2.pdf
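For concreteness, here is a minimal sketch of the terminal-node refinement 
described above, in plain Python rather than Spark. It uses the one-Newton-step 
leaf update for binomial log-loss from Friedman's TreeBoost formulation; the 
leaf groupings and residual values are purely illustrative, and the function 
name is hypothetical.

```python
# Toy illustration of TreeBoost terminal-node refinement for binomial
# log-loss. A "leaf" here is just a list of pseudo-residuals for the
# training rows that fell into that terminal node; the numbers are made up.

def refine_leaf(residuals):
    """One Newton step for binomial log-loss on a single terminal node.

    With labels y_i in {0, 1} and current probabilities p_i, the
    pseudo-residuals are r_i = y_i - p_i. The variance-minimizing leaf
    value is just mean(r_i); the refined value that (approximately)
    minimizes the actual log-loss is

        gamma = sum(r_i) / sum(p_i * (1 - p_i))
              = sum(r_i) / sum(|r_i| * (1 - |r_i|)),

    since p_i * (1 - p_i) == |r_i| * (1 - |r_i|) for y_i in {0, 1}.
    """
    num = sum(residuals)
    den = sum(abs(r) * (1.0 - abs(r)) for r in residuals)
    return num / den if den else 0.0

# Two terminal nodes of a regression tree fit to the pseudo-residuals.
leaf_a = [0.4, 0.3, 0.45]     # mostly positive-class rows
leaf_b = [-0.2, -0.1, -0.25]  # mostly negative-class rows

for leaf in (leaf_a, leaf_b):
    mean_pred = sum(leaf) / len(leaf)   # plain variance-minimizing value
    refined = refine_leaf(leaf)         # loss-minimizing refined value
    print(mean_pred, refined)
```

The refined values have the same sign as the plain means but larger magnitude, 
because the Newton step rescales each leaf by the local curvature of the loss 
rather than just averaging the gradients.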



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
