I'm also still hopeful there.
Unfortunately I will definitely be unable to mentor.
About pretraining: that is really out of style now ;)
Afaik "everybody" is now doing purely supervised training using drop-out.
Implementing pretrained deep nets should be fairly easy for a user if we
support more than one hidden layer; it is just a pipeline of RBMs /
autoencoders. As that is not so popular any more, I don't think we should
put much effort there.
Very true, it is all about having a stack of hidden layers :)
On the other hand I think it should be possible for you to find a topic
around these general concepts.
I am working on extending Extreme Learning Machines in my thesis; I think
that would be a good topic.
It differs from backpropagation in that, instead of running an iterative
gradient descent to find the weights minimizing the objective function, it
assigns the hidden-layer weights at random and solves for the output
weights in closed form with least squares. This makes it much faster.
While we don't hear about ELMs much, they are in fact highly cited.
*Extreme learning machine*: theory and applications
<http://www.sciencedirect.com/science/article/pii/S0925231206000385>
has 1285 citations and was published in 2006; that is a large number
of citations for a fairly recent article. I believe scikit-learn
could add such an interesting learning algorithm along with its
variations (weighted ELMs, sequential ELMs, etc.)
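
To make the procedure above concrete, here is a minimal NumPy sketch of a
basic (regression) ELM; the function names and the uniform initialization
are my choices for illustration, not a proposed API:

    import numpy as np

    def elm_fit(X, y, n_hidden=50, random_state=0):
        """Fit a basic ELM: random hidden layer, least-squares output weights."""
        rng = np.random.RandomState(random_state)
        # Hidden-layer weights and biases are drawn at random and never trained.
        W = rng.uniform(-1.0, 1.0, size=(X.shape[1], n_hidden))
        b = rng.uniform(-1.0, 1.0, size=n_hidden)
        H = np.tanh(X @ W + b)  # hidden-layer activations
        # The only "training" is the least-squares solve for the output weights.
        beta, *_ = np.linalg.lstsq(H, y, rcond=None)
        return W, b, beta

    def elm_predict(X, W, b, beta):
        return np.tanh(X @ W + b) @ beta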
Not to bandwagon extra things onto this particular effort, but one
future consideration is that if scikit-learn supported multilayer
neural networks, and eventually multilayer convolutional neural
networks, it would become feasible to load pretrained nets à la
OverFeat or DeCAF (recent papers with sweet results) and use them as
transforms.
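
As a rough illustration of the "use them as transforms" idea, a pretrained
net's forward pass could be wrapped in a scikit-learn transformer. The
class name PretrainedNetTransformer and the forward_fn parameter below are
hypothetical, not an existing API:

    from sklearn.base import BaseEstimator, TransformerMixin

    class PretrainedNetTransformer(BaseEstimator, TransformerMixin):
        """Hypothetical wrapper exposing a pretrained net's forward
        pass (e.g. OverFeat/DeCAF features) as a transform."""

        def __init__(self, forward_fn):
            self.forward_fn = forward_fn  # callable: raw inputs -> features

        def fit(self, X, y=None):
            return self  # weights are pretrained; nothing to fit

        def transform(self, X):
            return self.forward_fn(X)

One could then build make_pipeline(PretrainedNetTransformer(net),
LinearSVC()) and train only the classifier on top of the fixed features.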
I have implemented convolutional neural networks, but, like you said,
they are not feasible without GPUs. What you mentioned
sounds like a great opportunity to make NNs attractive for huge
datasets, but it seems no one is willing to mentor an NN project.
Afaik "everybody" is now doing purely supervised training using drop-out.
Interesting how drop-out gained momentum even though it is fairly recent :).
Since I have a basic version of the algorithm, I can work on it during GSoC.
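
For reference, here is a minimal sketch of how a drop-out mask could be
applied to a layer's activations; this is the standard "inverted" variant,
written for illustration rather than taken from my branch:

    import numpy as np

    def dropout(activations, p=0.5, rng=None, training=True):
        """Inverted drop-out: zero each unit with probability p during
        training and rescale, so no change is needed at test time."""
        if not training or p == 0.0:
            return activations
        if rng is None:
            rng = np.random.RandomState(0)
        mask = rng.binomial(1, 1.0 - p, size=activations.shape)
        return activations * mask / (1.0 - p)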
Chances are the multi-layer perceptron PR will be completed before the
summer, so it won't be included in the GSoC proposal.
To avoid scope creep, I compiled the following list of algorithms to
propose for GSoC 2014:
1) Extreme Learning Machines
(http://sentic.net/extreme-learning-machines.pdf)
1a) Weighted Extreme Learning Machines
1b) Sequential Extreme Learning Machines
2) Completing Sparse Auto-encoders
3) Extending MLP to support multiple hidden layers (see the sketch after this list)
3a) Deep Belief Network
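
To illustrate item 3, a minimal sketch of a forward pass through an
arbitrary stack of hidden layers; the function name and the tanh
activation are my choices for the example, not a settled design:

    import numpy as np

    def mlp_forward(X, weights, biases):
        """Propagate X through a stack of hidden layers; accepting a
        list of (W, b) pairs is what 'multiple hidden layers' amounts to."""
        a = X
        for W, b in zip(weights, biases):
            a = np.tanh(a @ W + b)  # one hidden layer per (W, b) pair
        return a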
Thank you very much!