[ https://issues.apache.org/jira/browse/IGNITE-10955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16754883#comment-16754883 ]

Artem Malykh commented on IGNITE-10955:
---------------------------------------

The current implementation of boosting appears to be more readable in its 
present state. The problem with the migration is that training in boosting is 
sequential, while model composition is parallel with a weighted aggregator. It 
is possible to extend the sequential trainer composition to output some general 
composition, with a specification of this composition producer, but to me this 
seems more cumbersome than the current GDB implementation. However, the work 
done on this feature that may be helpful in the future has been extracted into 
a separate ticket (IGNITE-111222).
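
To make the mismatch concrete, below is a minimal Java sketch (not the Ignite 
ML API; all class and method names are hypothetical and introduced only for 
illustration). A sequential trainer composition naturally yields a chained 
model x -> f2(f1(x)), whereas GDB trains sequentially (each weak model fits 
the residuals of the previous ones) but its output is a flat ensemble whose 
predictions are combined by a weighted aggregator, x -> sum_i w_i * f_i(x).

import java.util.List;
import java.util.function.DoubleUnaryOperator;

/** Hypothetical sketch contrasting the two kinds of model composition discussed above. */
public class CompositionSketch {

    /** Sequential composition: the second model consumes the output of the first, i.e. x -> f2(f1(x)). */
    static DoubleUnaryOperator sequential(DoubleUnaryOperator f1, DoubleUnaryOperator f2) {
        return f1.andThen(f2);
    }

    /** Parallel composition with a weighted aggregator, as GDB produces: x -> sum_i w_i * f_i(x). */
    static DoubleUnaryOperator weightedEnsemble(List<DoubleUnaryOperator> models, double[] weights) {
        return x -> {
            double sum = 0.0;
            for (int i = 0; i < models.size(); i++)
                sum += weights[i] * models.get(i).applyAsDouble(x);
            return sum;
        };
    }

    public static void main(String[] args) {
        DoubleUnaryOperator m1 = x -> x + 1;   // stand-in for a trained weak model
        DoubleUnaryOperator m2 = x -> 2 * x;   // stand-in for the next weak model

        // A sequential trainer composition would produce a chained model: 2 * (x + 1) = 8.0 at x = 3.
        System.out.println(sequential(m1, m2).applyAsDouble(3.0));

        // GDB instead produces a weighted sum: 0.5 * (x + 1) + 0.5 * (2 * x) = 5.0 at x = 3.
        System.out.println(weightedEnsemble(List.of(m1, m2), new double[] {0.5, 0.5}).applyAsDouble(3.0));
    }
}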

> [ML] Migrate boosting implementation to sequential trainers composition 
> combinator
> ----------------------------------------------------------------------------------
>
>                 Key: IGNITE-10955
>                 URL: https://issues.apache.org/jira/browse/IGNITE-10955
>             Project: Ignite
>          Issue Type: Improvement
>          Components: ml
>    Affects Versions: 2.8
>            Reporter: Artem Malykh
>            Assignee: Artem Malykh
>            Priority: Major
>
> There are two trainer composition primitives which are used in the 
> implementations of other ensemble training methods (Bagging and Stacking). 
> To unify the implementation, I suggest rewriting the boosting implementation 
> using these composition primitives as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
