On 2012-05-16, at 6:31 AM, Andreas Mueller <[email protected]> wrote:

> Btw, I am not sure theano is the best way to compute derivatives ;)

No? I would agree in the general case. However, in the case of MLPs and 
backprop, it's a use case for which Theano has been designed and heavily 
optimized. With it, it's very easy and quick to produce a correct MLP 
implementation (the deep learning tutorials contain one). 

It's *not* the best way to obtain a readable mathematical expression for the 
gradients, but it'll allow you to compute them easily/correctly, which makes it 
a useful thing to verify against.  I've done this a fair bit myself.
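(For readers curious what that verification looks like: the same check can be sketched with finite differences standing in for Theano's T.grad as the oracle. Everything below — the tiny network, the sizes, the tolerance — is a hypothetical minimal example, not code from this thread.)

```python
import numpy as np

# Hypothetical minimal check: compare hand-written backprop gradients for a
# tiny one-hidden-layer MLP against numerical finite differences. The idea in
# the email is to verify against Theano's T.grad; central differences are used
# here as a stand-in oracle that needs only NumPy.

rng = np.random.RandomState(0)
X = rng.randn(5, 3)          # 5 samples, 3 features
y = rng.randn(5, 2)          # 2 regression targets
W1 = rng.randn(3, 4) * 0.1   # input -> hidden weights
W2 = rng.randn(4, 2) * 0.1   # hidden -> output weights

def loss(W1, W2):
    H = np.tanh(X.dot(W1))               # hidden activations
    return 0.5 * np.sum((H.dot(W2) - y) ** 2)

def grads(W1, W2):
    # Forward pass, then backprop the squared-error loss.
    H = np.tanh(X.dot(W1))
    E = H.dot(W2) - y                    # output-layer error
    gW2 = H.T.dot(E)
    dH = E.dot(W2.T) * (1 - H ** 2)      # tanh'(a) = 1 - tanh(a)^2
    gW1 = X.T.dot(dH)
    return gW1, gW2

def numeric_grad(f, W, eps=1e-6):
    # Central finite differences, one weight at a time.
    g = np.zeros_like(W)
    for i in np.ndindex(*W.shape):
        Wp, Wm = W.copy(), W.copy()
        Wp[i] += eps
        Wm[i] -= eps
        g[i] = (f(Wp) - f(Wm)) / (2 * eps)
    return g

gW1, gW2 = grads(W1, W2)
assert np.allclose(gW1, numeric_grad(lambda W: loss(W, W2), W1), atol=1e-5)
assert np.allclose(gW2, numeric_grad(lambda W: loss(W1, W), W2), atol=1e-5)
print("backprop gradients match finite differences")
```

With Theano instead, one would build the same loss symbolically and call T.grad(cost, [W1, W2]); the point either way is having an independent gradient to compare against.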

I've never had as much success with symbolic tools like Wolfram Alpha in 
situations involving lots of sums over indexed scalar quantities and whatnot, 
but perhaps I didn't try hard enough. 

Once the initial version is working, Theano will serve another purpose: as a 
speed benchmark to try and beat (or at least not be too far behind). :)

David

_______________________________________________
Scikit-learn-general mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/scikit-learn-general