I believe building a backprop-trained MLP would be much better than using
the perceptron learning algorithm. In my experience, perceptron learning
converges quite slowly (and only converges at all when the data are
linearly separable); happy to hear other thoughts.
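
For concreteness, the classic perceptron rule only updates on
misclassified points, and how long it takes depends on the margin of the
data. A minimal sketch in plain NumPy (names and structure are mine, not
anything from MADlib):

    import numpy as np

    def perceptron_train(X, y, epochs=100):
        """Rosenblatt perceptron rule: update only on mistakes.

        Assumes labels y in {-1, +1}. Converges only if the data are
        linearly separable, and can need many passes even then."""
        w = np.zeros(X.shape[1])
        b = 0.0
        for _ in range(epochs):
            mistakes = 0
            for xi, yi in zip(X, y):
                if yi * (np.dot(w, xi) + b) <= 0:  # misclassified
                    w += yi * xi                   # mistake-driven update
                    b += yi
                    mistakes += 1
            if mistakes == 0:                      # data separated: done
                break
        return w, b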

On Wed, Feb 22, 2017 at 11:40 AM, Kazmi,Auon H <[email protected]> wrote:

> Hi Xixuan,
>
> Thanks! Is starting with a single-layer perceptron (and thus using the
> perceptron learning algorithm rather than SGD) a better idea?
>
>
>
> Auon
>
> ________________________________
> From: Feng, Xixuan (Aaron) <[email protected]>
> Sent: Tuesday, February 21, 2017 9:33:02 PM
> To: [email protected]
> Subject: Re: Regarding new Perceptron module
>
> Hi Auon,
>
> I made an effort to implement a multi-layer perceptron a few years
> back but never got the time to finish it. The latest commits on these 2
> branches have the history:
> https://github.com/haying/madlib/commits/nn_design
> https://github.com/haying/madlib/commits/mlp
>
> I was using the convex programming framework in MADlib to implement it
> with stochastic gradient descent. You may or may not want to do it the same way.
>
> Thanks,
> Feng, Xixuan (Aaron)
>
> On Wed, Feb 22, 2017 at 8:59 AM, Kazmi,Auon H <[email protected]> wrote:
>
> > Hi all,
> >
> > I am currently working on implementing the Perceptron Learning Algorithm
> > in MADlib. This module would serve as a single neuron when implementing
> > any multi-layer neural network.
> >
> > Since it involves state transitions, optimizations, etc., could you
> > suggest an existing module that could help in this context?
> >
> > Regards,
> >
> > Auon
> >
>
>
>
> --
> - Aaron
>

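To make the comparison concrete: with backprop you get a gradient you can
feed straight into SGD, which is also what Aaron's convex-framework
branches do. A bare-bones sketch of one SGD step for a one-hidden-layer
MLP, again in plain NumPy rather than anything MADlib-specific:

    import numpy as np

    def mlp_sgd_step(W1, b1, W2, b2, x, y, lr=0.01):
        """One SGD step for a 1-hidden-layer MLP (tanh hidden units,
        squared-error loss). Returns the updated parameters."""
        h = np.tanh(W1 @ x + b1)          # forward: hidden activations
        y_hat = W2 @ h + b2               # forward: linear output
        err = y_hat - y                   # dLoss/dy_hat for 0.5*||err||^2
        dW2 = np.outer(err, h)            # output-layer gradients
        db2 = err
        dh = (W2.T @ err) * (1.0 - h**2)  # backprop through tanh
        dW1 = np.outer(dh, x)             # hidden-layer gradients
        db1 = dh
        return (W1 - lr * dW1, b1 - lr * db1,
                W2 - lr * dW2, b2 - lr * db2)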