There's already an implementation of a multi-layer perceptron using backpropagation
in Mahout that's pending review and integration into the Mahout trunk.

See https://issues.apache.org/jira/browse/MAHOUT-1265.

Yexi has documented the approach and the design in the JIRA ticket.

Unless what you are proposing is more efficient than Yexi's implementation,
we would be duplicating work here.

Just a thought.
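
On the parameter question further down the thread: here is a minimal, hypothetical
Java sketch (not the MAHOUT-1265 API; all class and method names are made up) of
how those data-dependent hyperparameters could be exposed as explicit setters, so
that both tests and a higher-level tuner can set them from the outside rather than
relying on values hard-coded in the trainer:

// Hypothetical sketch only, not the MAHOUT-1265 API. It shows the
// hyperparameters mentioned below (learning rate, momentum, error
// threshold, layer sizes) exposed as chainable setters.
public class MlpConfig {

    private double learningRate = 0.01;   // step size for gradient updates
    private double momentum = 0.9;        // fraction of the previous update carried over
    private double errorThreshold = 1e-4; // stop once the epoch error falls below this
    private int[] layerSizes = {2, 4, 1}; // input, hidden..., output neuron counts

    public MlpConfig learningRate(double learningRate) {
        this.learningRate = learningRate;
        return this;
    }

    public MlpConfig momentum(double momentum) {
        this.momentum = momentum;
        return this;
    }

    public MlpConfig errorThreshold(double errorThreshold) {
        this.errorThreshold = errorThreshold;
        return this;
    }

    public MlpConfig layerSizes(int... layerSizes) {
        this.layerSizes = layerSizes;
        return this;
    }

    public static void main(String[] args) {
        // A unit test or an outer tuning loop can sweep these values explicitly.
        MlpConfig config = new MlpConfig()
            .learningRate(0.05)
            .momentum(0.8)
            .errorThreshold(1e-3)
            .layerSizes(10, 20, 1);
        System.out.println("configured layers: " + config.layerSizes.length);
    }
}

The only point of the sketch is that the values are settable from outside the
trainer, which is what makes them reachable by tests and by a higher-level learner.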





On Saturday, October 19, 2013 5:31 AM, surabhi pandey <[email protected]> 
wrote:
 
Thanks for replying, Ted. As per our understanding, you are saying that the
developer needs to determine these values beforehand using some dynamic
techniques, and that at testing time the user will assign these values based
on the optimal values generated by those techniques. Is that correct?


On Sat, Oct 19, 2013 at 11:57 AM, Ted Dunning <[email protected]> wrote:

> That has been the practice in Mahout so far.
>
> Generally, a higher level learner is used to adjust those parameters, but
> it is important for testing purposes to expose them.
>
>
> On Sat, Oct 19, 2013 at 6:16 AM, Sushanth Bhat(MT2012147) <
> [email protected]> wrote:
>
> > Hi,
> >
> > We are implementing a multi-layer perceptron neural network using
> > back-propagation for Mahout. Some parameters, such as the learning
> > rate, momentum, activation function, threshold error, number of layers,
> > and number of neurons in the hidden layers, depend on the input data.
> > Should we make these parameters user-supplied?
> >
> >
> > Thanks and regards,
> > Sushanth Bhat
> > IIIT-Bangalore
> >
>



-- 
Surabhi
http://www.linkedin.com/pub/surabhi-pandey/22/46/904
