GitHub user kaknikhil opened a pull request:

    https://github.com/apache/madlib/pull/272

    MLP: Add momentum and Nesterov to gradient updates.

    JIRA: MADLIB-1210
    
    We refactored the minibatch code to separate out the momentum and model
    update functions. Initially we used the same function to compute the
    loss and gradient for both IGD and minibatch, but the overhead of
    creating and updating the total_gradient_per_layer variable made IGD
    slower. So we decided against sharing that code, and now call the
    separate model and momentum update functions for both IGD and
    minibatch.
    
    Co-authored-by: Rahul Iyer <[email protected]>
    Co-authored-by: Jingyi Mei <[email protected]>
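
    For context, here is a minimal sketch of what separated momentum and
    model update steps can look like. The function names, signatures, and
    the toy objective below are illustrative assumptions, not the actual
    MADlib implementation:

        import numpy as np

        def momentum_update(velocity, gradient, momentum=0.9):
            # Classical momentum: velocity is a decaying sum of past gradients.
            return momentum * velocity + gradient

        def model_update(coeffs, velocity, gradient, learning_rate,
                         momentum=0.9, nesterov=False):
            if nesterov:
                # Nesterov look-ahead: step along the gradient plus the
                # momentum-scaled velocity instead of the velocity alone.
                step = gradient + momentum * velocity
            else:
                step = velocity
            return coeffs - learning_rate * step

        # Toy usage: minimize f(w) = ||w||^2 / 2, whose gradient is w.
        w = np.array([5.0, -3.0])
        v = np.zeros_like(w)
        for _ in range(100):
            grad = w                      # gradient of the toy objective
            v = momentum_update(v, grad)  # shared by IGD and minibatch paths
            w = model_update(w, v, grad, learning_rate=0.05, nesterov=True)
        print(w)  # converges toward [0, 0]

    Because both paths call the same two small update functions, neither
    IGD nor minibatch needs the per-layer total-gradient accumulator that
    slowed IGD down.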

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/madlib/madlib feature/mlp_momentum

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/madlib/pull/272.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #272
    
----
commit 176e197f48732443ce658c5d02cefc8c45e7ff52
Author: Rahul Iyer <riyer@...>
Date:   2018-05-02T12:25:48Z

    MLP: Add momentum and Nesterov to gradient updates.
    
    JIRA: MADLIB-1210
    
    We refactored the minibatch code to separate out the momentum and model
    update functions. Initially we used the same function to compute the
    loss and gradient for both IGD and minibatch, but the overhead of
    creating and updating the total_gradient_per_layer variable made IGD
    slower. So we decided against sharing that code, and now call the
    separate model and momentum update functions for both IGD and
    minibatch.
    
    Co-authored-by: Rahul Iyer <[email protected]>
    Co-authored-by: Jingyi Mei <[email protected]>

----

