Hi @Hannes, how about using scipy.optimize.fmin_l_bfgs_b for optimizing 
the weights? I found it to be very efficient and fast (even faster than 
minFunc in MATLAB), and it's also widely used for neural-network-style 
optimization, e.g. in Prof. Andrew's courses and in deep learning work. 
Thanks.
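To make the suggestion concrete, here is a minimal sketch of how 
fmin_l_bfgs_b could be plugged in. The quadratic loss below is just a 
hypothetical stand-in for a network's training objective; the key point 
is that the objective function can return a (loss, gradient) tuple, which 
the optimizer uses directly when no separate fprime is given.

```python
import numpy as np
from scipy.optimize import fmin_l_bfgs_b

# Hypothetical objective: a simple quadratic standing in for a
# network's training loss over a flat weight vector.
target = np.array([1.0, -2.0, 3.0])

def loss_and_grad(w):
    # Return the scalar loss and its gradient as a tuple; with the
    # default approx_grad=0 and no fprime, fmin_l_bfgs_b expects this.
    diff = w - target
    return np.dot(diff, diff), 2.0 * diff

w0 = np.zeros(3)  # initial weights
w_opt, final_loss, info = fmin_l_bfgs_b(loss_and_grad, w0)
```

For a real MLP, loss_and_grad would unpack w into the layer weight 
matrices, run forward/backward passes, and flatten the gradients back 
into a single vector.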

On 6/24/2013 7:49 PM, Hannes Schulz wrote:
> Before things diverge completely, please also have a look at
>
> https://github.com/temporaer/scikit-learn/tree/mlperceptron
>
> and the discussions at
>
> https://github.com/larsmans/scikit-learn/pull/5
>
> where I tried to refactor larsmans' code and the gradient descent into 
> activity and weight layers, and also added some abstraction for 
> different ways of performing gradient descent. It's in cython, too, 
> and far from being finished, but may be a better starting point for 
> your efforts.
>
> cheers,
>
> -Hannes
>


_______________________________________________
Scikit-learn-general mailing list
Scikit-learn-general@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/scikit-learn-general
