On 02/05/2014 04:30 PM, Gael Varoquaux wrote:
> On Wed, Feb 05, 2014 at 03:02:24PM +0300, Issam wrote:
>> I have been working on three pull requests for scikit-learn - namely,
>> Multi-layer Perceptron (MLP), Sparse Auto-encoders, and Gaussian
>> Restricted Boltzmann Machines.
> Yes, you have been doing good work here!
+1
>> For the upcoming GSoC, I propose to complete these three pull
>> requests. I would also develop a greedy layer-wise training algorithm for
>> deep learning, extending MLP to allow for more than one hidden layer,
>> where weights are initialized using sparse auto-encoders or RBMs.
>> How would this suit GSoC?
> The MLP is almost finished. I would hope that it would be finished before
> the GSoC. Actually, I was hoping that it could be finished before next
> release.
I'm also still hopeful there.
Unfortunately, I will definitely be unable to mentor.

About pretraining: that is really out of style now ;)
AFAIK "everybody" is now doing purely supervised training with dropout.

Implementing pretrained deep nets should be fairly easy for a user if we
support more than one hidden layer: it is just a pipeline of RBMs /
auto-encoders. As that approach is not very popular any more, I don't
think we should put much effort there.
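To illustrate what I mean by "a pipeline of RBMs": here is a minimal sketch using the existing BernoulliRBM as an unsupervised feature learner, stacked twice and followed by a logistic regression on top. The dataset and all hyperparameters (component counts, learning rate, iteration count) are illustrative choices, not a recommendation.

```python
# Sketch: greedy layer-wise "pretraining" as a plain sklearn Pipeline.
# Each RBM learns features from the output of the layer below it;
# only the final classifier is trained with supervision.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline

X, y = load_digits(return_X_y=True)
X = X / 16.0  # scale pixel values into [0, 1] for the Bernoulli RBM

model = Pipeline([
    ("rbm1", BernoulliRBM(n_components=128, learning_rate=0.05,
                          n_iter=10, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=64, learning_rate=0.05,
                          n_iter=10, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X, y)
print(model.score(X, y))
```

Note this is not true fine-tuning: gradients never flow back through the RBM layers, which is exactly why a real multi-layer MLP would be needed to go further.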

Deeper nets might be interesting, but I'm quite sceptical about doing 
that without GPUs.

On the other hand I think it should be possible for you to find a topic 
around these general concepts.
But I'm not sure who could mentor.

Cheers,
Andy

_______________________________________________
Scikit-learn-general mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/scikit-learn-general