[ https://issues.apache.org/jira/browse/FLINK-1807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15585342#comment-15585342 ]

Till Rohrmann commented on FLINK-1807:
--------------------------------------

This could work as a workaround, but I don't think it's the proper solution to the
problem. Instead, we should try to fix the static path problem itself. That is, we
could introduce an operator flag that allows marking an operator as dynamic. When
determining the static path, not only the partial solution data set but also the
sampling operator, which currently lies on a static path, should be marked as
dynamic. Then, for each iteration, execution would have to start from all dynamic
operators.
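
To make the idea concrete, a rough sketch of how such a flag might look from the
API side (purely hypothetical: neither markDynamic nor the proposed flag exists in
Flink; updateWeights is a placeholder for the SGD update step, and sample is the
DataSetUtils sampling operator):

    // Hypothetical sketch of the proposed operator flag. Today the sampling
    // operator sits on a static path and is executed only once; marking it as
    // dynamic would force its re-execution in every iteration.
    val result = initialWeights.iterate(numIterations) { weights =>
      val batch = trainingData
        .sample(withReplacement = false, fraction = 0.1) // currently on a static path
        .markDynamic()                                   // hypothetical flag from this proposal
      updateWeights(weights, batch)                      // placeholder for the SGD step
    }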

> Stochastic gradient descent optimizer for ML library
> ----------------------------------------------------
>
>                 Key: FLINK-1807
>                 URL: https://issues.apache.org/jira/browse/FLINK-1807
>             Project: Flink
>          Issue Type: Improvement
>          Components: Machine Learning Library
>            Reporter: Till Rohrmann
>            Assignee: Theodore Vasiloudis
>              Labels: ML
>
> Stochastic gradient descent (SGD) is a widely used optimization technique in 
> different ML algorithms. Thus, it would be helpful to provide a generalized 
> SGD implementation which can be instantiated with the respective gradient 
> computation. Such a building block would make the development of future 
> algorithms easier.
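
For illustration, such a generalized building block could look roughly like the
following (plain Scala sketch with a dense-array representation; LossGradient,
SquaredLoss, and sgdStep are made-up names for illustration, not the Flink ML API):

    // A pluggable gradient computation: SGD is written once against this trait
    // and instantiated with the loss-specific gradient.
    trait LossGradient {
      def gradient(w: Array[Double], x: Array[Double], y: Double): Array[Double]
    }

    object SquaredLoss extends LossGradient {
      // d/dw (w.x - y)^2 / 2 = (w.x - y) * x
      def gradient(w: Array[Double], x: Array[Double], y: Double): Array[Double] = {
        val err = w.zip(x).map { case (wi, xi) => wi * xi }.sum - y
        x.map(_ * err)
      }
    }

    // One stochastic update on a single example (x, y).
    def sgdStep(w: Array[Double], x: Array[Double], y: Double,
                loss: LossGradient, stepSize: Double): Array[Double] =
      w.zip(loss.gradient(w, x, y)).map { case (wi, gi) => wi - stepSize * gi }

    // Usage: val wNew = sgdStep(w, x, y, SquaredLoss, stepSize = 0.01)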



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
