On 11/04/2011 03:42 PM, Alexandre Passos wrote:
> On Fri, Nov 4, 2011 at 10:34, Lars Buitinck <[email protected]> wrote:
>> 2011/11/4 Alexandre Passos <[email protected]>:
>>> I have a question: why not just use Theano for this? I doubt that we
>>> can write neural network code that's as fast as their automatically
>>> generated code.
>> Would that mean an extra run-time dependency?
>
> Yes, since Theano needs a compiler (gcc, or nvcc if you want to use
> CUDA) available at run time. But even so, it is often faster than most
> hand-coded implementations of neural networks. James Bergstra reads
> this list occasionally, and he's one of the main people behind Theano,
> so he can give more info here.
>
>
I don't think sklearn aims at beating CUDA implementations.
As for using Theano: that's a huge and, imho, unnecessary dependency.
For a simple MLP, I don't think Theano will beat a hand-implemented
version. Afaik, Torch7 is faster than Theano for CNNs and MLPs, and it
does not compile its algorithms. If you want to vary your implementation
a lot and do fancy things, Theano is probably faster. But I was thinking
more of an easy-to-use classifier.
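To illustrate what "hand-implemented" means here: a simple MLP of the kind
under discussion fits in a few dozen lines of NumPy. The sketch below is my
own toy example, not scikit-learn or Theano code; the class name, sigmoid
units, squared-error loss, and XOR demo data are all illustrative
assumptions.

```python
# Minimal hand-coded one-hidden-layer MLP (illustrative sketch only):
# sigmoid activations, squared-error loss, plain batch gradient descent.
import numpy as np


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


class SimpleMLP:
    def __init__(self, n_in, n_hidden, n_out, lr=0.5, seed=0):
        rng = np.random.RandomState(seed)
        self.W1 = rng.uniform(-1, 1, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.uniform(-1, 1, (n_hidden, n_out))
        self.b2 = np.zeros(n_out)
        self.lr = lr

    def forward(self, X):
        # Cache activations for use in the backward pass.
        self.h = sigmoid(X.dot(self.W1) + self.b1)
        self.o = sigmoid(self.h.dot(self.W2) + self.b2)
        return self.o

    def backward(self, X, y):
        # Gradient of 0.5 * sum((o - y)**2) through the sigmoids.
        d_o = (self.o - y) * self.o * (1 - self.o)
        d_h = d_o.dot(self.W2.T) * self.h * (1 - self.h)
        self.W2 -= self.lr * self.h.T.dot(d_o)
        self.b2 -= self.lr * d_o.sum(axis=0)
        self.W1 -= self.lr * X.T.dot(d_h)
        self.b1 -= self.lr * d_h.sum(axis=0)

    def fit(self, X, y, n_epochs=5000):
        for _ in range(n_epochs):
            self.forward(X)
            self.backward(X, y)
        return self

    def predict(self, X):
        return (self.forward(X) > 0.5).astype(int)


# Toy XOR problem: not linearly separable, so the hidden layer matters.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
mlp = SimpleMLP(2, 4, 1).fit(X, y)
print(mlp.predict(X).ravel())
```

Nothing here is compiled or GPU-aware, which is the trade-off being
discussed: this is easy to read and ship, while Theano or Torch7 buy speed
at the cost of a heavier dependency.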



_______________________________________________
Scikit-learn-general mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/scikit-learn-general
