Github user manishamde commented on the pull request:

    https://github.com/apache/spark/pull/2607#issuecomment-61221341
  
    @jkbradley I cleaned up the public API based on our discussion. Going with 
a nested structure, where the weak learner parameters are specified 
separately, is cleaner, but it puts the onus on us to write very good 
documentation.
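To make the nested-structure idea concrete, here is a minimal sketch (hypothetical names, not the actual Spark API): boosting-level parameters live in one object, and the weak learner's parameters are specified separately in a nested object.

```python
# Hedged sketch of a nested parameter structure (illustrative names only).
from dataclasses import dataclass, field

@dataclass
class TreeParams:
    # Parameters of the weak learner (a decision tree), specified separately.
    max_depth: int = 3
    min_instances_per_node: int = 1

@dataclass
class BoostingParams:
    # Boosting-level parameters, with the weak learner's config nested inside.
    num_iterations: int = 100
    learning_rate: float = 0.1
    weak_learner: TreeParams = field(default_factory=TreeParams)

params = BoostingParams(num_iterations=50, weak_learner=TreeParams(max_depth=5))
print(params.weak_learner.max_depth)  # 5
```

The nesting keeps the boosting and weak-learner concerns cleanly separated, which is exactly why the documentation has to spell out which parameters belong where.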
    
    I am tempted to keep AbsoluteError and LogLoss as-is, with the appropriate 
caveats in the documentation. A regression tree with mean prediction at the 
terminal nodes is not the best approximation (as pointed out by the TreeBoost 
paper), but it is not a bad one either. After all, we are only approximating 
the gradient at each step. Moreover, other weak learning algorithms (for 
example, logistic regression) would be hard to tailor to each specific loss 
function. Thoughts?
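The mean-vs-optimal-leaf point can be sketched numerically (plain Python, not Spark code). For absolute error, the pseudo-residual (negative gradient) is sign(y - F(x)); a generic regression tree fit to these residuals predicts the mean of the residuals in each leaf, whereas the TreeBoost refinement would use the loss-optimal constant, which for absolute error is the median of the raw residuals:

```python
# Hedged illustration of one boosting step for absolute-error (L1) loss.

def pseudo_residuals_l1(y, pred):
    """Negative gradient of |y - F| with respect to F: sign(y - F)."""
    return [1.0 if yi > pi else -1.0 if yi < pi else 0.0
            for yi, pi in zip(y, pred)]

def mean(xs):
    return sum(xs) / len(xs)

def median(xs):
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

# Toy data routed to a single leaf: current predictions and targets.
y    = [1.0, 2.0, 3.0, 10.0]   # 10.0 is an outlier
pred = [2.0, 2.0, 2.0, 2.0]

grads = pseudo_residuals_l1(y, pred)               # [-1.0, 0.0, 1.0, 1.0]
residuals = [yi - pi for yi, pi in zip(y, pred)]   # [-1.0, 0.0, 1.0, 8.0]

# Mean-prediction leaf (what a generic regression tree on gradients gives):
leaf_mean = mean(grads)
# TreeBoost-style leaf for L1 loss: median of the raw residuals:
leaf_median = median(residuals)

print(leaf_mean)    # 0.25
print(leaf_median)  # 0.5
```

The two leaf values differ, but both move the model in the right direction, which is the sense in which the mean-prediction tree is "not a bad" approximation.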


