[
https://issues.apache.org/jira/browse/MADLIB-1206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16381148#comment-16381148
]
Jingyi Mei commented on MADLIB-1206:
------------------------------------
Looks good to me.
For the summary table of mlp, the output will reflect whatever input source
table was passed to mlp, so it will be slightly different from calling mlp
directly without preprocessing. But users should be aware of this when they
run the preprocessor first. A sketch of the two call paths is below.
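A minimal sketch of the two call paths, assuming the minibatch_preprocessor
interface from MADLIB-1200 and hypothetical table/column names (iris_data,
attributes, class):

-- Path 1: train directly on the raw table; the model summary
-- records 'iris_data' as the source table.
SELECT madlib.mlp_classification(
    'iris_data',           -- source table (hypothetical)
    'mlp_model_direct',    -- output model table
    'attributes',          -- independent variable column
    'class',               -- dependent variable column
    ARRAY[5],              -- one hidden layer with 5 units
    'learning_rate_init=0.003, n_iterations=500'
);

-- Path 2: pack the data first, then train on the packed table;
-- the model summary now reflects 'iris_data_packed' instead.
SELECT madlib.minibatch_preprocessor(
    'iris_data',           -- source table
    'iris_data_packed',    -- output table of packed mini-batches
    'class',               -- dependent variable
    'attributes'           -- independent variables
);
SELECT madlib.mlp_classification(
    'iris_data_packed',    -- preprocessed input table
    'mlp_model_batched',   -- output model table
    'independent_varname', -- packed column name written by the preprocessor
    'dependent_varname',   -- packed column name written by the preprocessor
    ARRAY[5],
    'learning_rate_init=0.003, n_iterations=500'
);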
> Add mini batch based gradient descent support to MLP
> ----------------------------------------------------
>
> Key: MADLIB-1206
> URL: https://issues.apache.org/jira/browse/MADLIB-1206
> Project: Apache MADlib
> Issue Type: New Feature
> Components: Module: Neural Networks
> Reporter: Nandish Jayaram
> Assignee: Nandish Jayaram
> Priority: Major
> Fix For: v1.14
>
>
> Mini-batch gradient descent is typically the algorithm of choice when
> training a neural network.
> MADlib currently supports IGD; we may have to add extensions to include
> mini-batch as a solver for MLP. Other modules will continue to use the
> existing IGD that does not support mini-batching. Later JIRAs will move other
> modules over one at a time to use the new mini-batch GD.
> Related JIRA that will pre-process the input data to be consumed by
> mini-batch is https://issues.apache.org/jira/browse/MADLIB-1200
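For reference, a worked form of the distinction described above (standard
notation, not from the JIRA): IGD updates the weights after every single
example, while mini-batch GD averages the gradient over a batch before each
update.

% Existing IGD: one weight update per training example (x_i, y_i)
w \leftarrow w - \eta \, \nabla_w \ell(w; x_i, y_i)

% Mini-batch GD: one update per batch B, averaging the per-example gradients
w \leftarrow w - \frac{\eta}{|B|} \sum_{i \in B} \nabla_w \ell(w; x_i, y_i)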