[ 
https://issues.apache.org/jira/browse/MADLIB-1206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Frank McQuillan reopened MADLIB-1206:
-------------------------------------
      Assignee: Rahul Iyer  (was: Nandish Jayaram)

When the mini-batch preprocessor is run with grouping, MLP should only support 
exactly the same grouping.  Currently, MLP with mini-batching will run with no 
groups or with any other grouping, which gives erroneous results.
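
As a rough illustration, the check below is a minimal sketch of that restriction, 
assuming the preprocessor records its grouping in a <output_table>_summary table 
with a 'grouping_cols' column (both of those names are assumptions here, not the 
actual preprocessor output layout):

{code:python}
# Sketch only: reject an MLP call whose grouping does not match the grouping
# the mini-batch preprocessor was run with. Summary table name and the
# 'grouping_cols' column are assumptions for illustration.
import plpy  # available inside MADlib's PL/Python driver functions

def _validate_minibatch_grouping(source_table, grouping_col):
    rows = plpy.execute(
        "SELECT grouping_cols FROM {0}_summary".format(source_table))
    pre_grouping = rows[0]["grouping_cols"] or None
    if (grouping_col or None) != pre_grouping:
        plpy.error(
            "MLP: grouping_col ('{0}') must match the grouping used by the "
            "mini-batch preprocessor ('{1}')".format(grouping_col, pre_grouping))
{code}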

One approach is to follow the same convention as 'independent_varname' and 
'dependent_varname' for the MLP parameter names when using mini-batching.  The 
grouping column could similarly be hardcoded to 'grouping_col'.
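
A minimal sketch of that convention, with illustrative names only (not a final API):

{code:python}
# Sketch of the proposed convention: when the source table comes from the
# mini-batch preprocessor, MLP ignores user-supplied column names and reads
# the fixed names written by the preprocessor.
MINIBATCH_INDEPENDENT = "independent_varname"
MINIBATCH_DEPENDENT = "dependent_varname"
MINIBATCH_GROUPING = "grouping_col"

def resolve_mlp_columns(is_minibatch_source, independent_varname,
                        dependent_varname, grouping_col):
    """Return the column names MLP should actually read from the source table."""
    if is_minibatch_source:
        return (MINIBATCH_INDEPENDENT, MINIBATCH_DEPENDENT, MINIBATCH_GROUPING)
    return (independent_varname, dependent_varname, grouping_col)
{code}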

Let me know if that seems like a reasonable approach.

 

> Add mini batch based gradient descent support to MLP
> ----------------------------------------------------
>
>                 Key: MADLIB-1206
>                 URL: https://issues.apache.org/jira/browse/MADLIB-1206
>             Project: Apache MADlib
>          Issue Type: New Feature
>          Components: Module: Neural Networks
>            Reporter: Nandish Jayaram
>            Assignee: Rahul Iyer
>            Priority: Major
>             Fix For: v1.14
>
>
> Mini-batch gradient descent is typically the algorithm of choice when 
> training a neural network.
> MADlib currently supports IGD; we may have to add extensions to include 
> mini-batch as a solver for MLP. Other modules will continue to use the 
> existing IGD, which does not support mini-batching. Later JIRAs will move 
> other modules over, one at a time, to the new mini-batch GD.
> The related JIRA that will pre-process the input data to be consumed by 
> mini-batch is https://issues.apache.org/jira/browse/MADLIB-1200
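
For reference, a generic NumPy sketch of mini-batch gradient descent itself (the 
technique the issue describes, not the MADlib/MLP implementation):

{code:python}
# Generic mini-batch gradient descent sketch; grad_fn is a caller-supplied
# gradient function and all parameter defaults are illustrative.
import numpy as np

def minibatch_gd(X, y, grad_fn, w, batch_size=32, lr=0.01, epochs=10):
    """Shuffle the rows each epoch, slice them into mini-batches, and take
    one gradient step per batch (versus one step per row as in IGD)."""
    n = X.shape[0]
    for _ in range(epochs):
        perm = np.random.permutation(n)
        for start in range(0, n, batch_size):
            idx = perm[start:start + batch_size]
            w = w - lr * grad_fn(w, X[idx], y[idx])
    return w
{code}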



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
