[ 
https://issues.apache.org/jira/browse/MADLIB-1206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nandish Jayaram updated MADLIB-1206:
------------------------------------
    Description: 
Mini-batch gradient descent is typically the algorithm of choice when training 
a neural network.

MADlib currently supports IGD (incremental gradient descent); we may have to 
add extensions to support mini-batch as a solver for MLP. Other modules will 
continue to use the existing IGD, which does not support mini-batching. Later 
JIRAs will move the other modules over, one at a time, to the new mini-batch GD.
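The essential difference between the two solvers is how many rows are consumed 
per model update: IGD updates the model after every row, while mini-batch GD 
averages the gradient over a small batch of rows before each update. A minimal 
NumPy sketch of the idea for a linear least-squares model (illustrative only; 
the function name, parameters, and model here are assumptions, not MADlib's 
actual MLP internals):

{code:python}
import numpy as np

def minibatch_gd(X, y, lr=0.01, batch_size=32, epochs=10):
    """Illustrative mini-batch GD for a linear least-squares model.

    Not MADlib code; setting batch_size=1 recovers IGD's
    one-update-per-row behavior.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        idx = np.random.permutation(n)  # shuffle rows each epoch
        for start in range(0, n, batch_size):
            batch = idx[start:start + batch_size]
            Xb, yb = X[batch], y[batch]
            # Average the gradient over the batch, then take one step.
            grad = Xb.T @ (Xb @ w - yb) / len(batch)
            w -= lr * grad
    return w
{code}

Setting batch_size=1 makes this identical to the existing row-at-a-time IGD 
update, which suggests the mini-batch solver can be layered on the current IGD 
code path as an extension.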

  was:Having mini-batch GD in MLP might be more efficient. Can we add that 
capability?


> Add mini batch based gradient descent support to MLP
> ----------------------------------------------------
>
>                 Key: MADLIB-1206
>                 URL: https://issues.apache.org/jira/browse/MADLIB-1206
>             Project: Apache MADlib
>          Issue Type: New Feature
>          Components: Module: Neural Networks
>            Reporter: Nandish Jayaram
>            Assignee: Nandish Jayaram
>            Priority: Major
>             Fix For: v1.14
>


