On 21 March 2012 at 01:39, Olivier Grisel <[email protected]> wrote:
> On 21 March 2012 at 01:21, David Marek <[email protected]> wrote:
>> Hi
>>
>> I think I was a little confused; I'll try to summarize what I
>> understand is needed:
>>
>> * the goal is to have a multilayer perceptron with stochastic
>> gradient descent and maybe other learning algorithms
>> * a basic implementation of SGD already exists
>> * the existing implementation is missing some loss functions; we need
>> to reuse some from sgd-fast and implement others
>> * the existing implementation is slow; the final implementation
>> should be written in Cython
>>
>> Am I correct?
>
> It seems so.
>
>> I have experimented with Cython and realized that there will be work
>> needed to make the mlp faster.
>
> Preallocating the arrays to store that activations and ...

Hmm, I forgot to finish that part of the sentence... I meant:

Preallocating the arrays that store the activations of the hidden and
output units outside of the main backprop loop is probably very
important too.

-- 
Olivier
http://twitter.com/ogrisel - http://github.com/ogrisel

_______________________________________________
Scikit-learn-general mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/scikit-learn-general