Hi Andy,
You are absolutely right, I'm proposing something I'm not familiar
with. It would be very difficult to learn it and integrate it into
scikit-learn within two months. However, the comments were more than
helpful in clarifying what I'm facing :).
Thanks a lot!
yours truly,
--Issam
On 5/3/2013 11:12 PM, Andreas Mueller wrote:
Hi Issam.
Sorry to break this to you, but sklearn will not add any GPU code in
the near future.
Also, we will probably not use numba for quite a while.
I think it is possible that we will want to replace some Cython with
numba at some point, but I don't see that happening this year.
For your proposal: actually Deep Boltzmann machines are strictly more
general than Deep Belief Networks.
If you don't know this, I'm not sure you know enough about these
algorithms to implement them.
Also, stacked denoising autoencoders are synonymous with deep
autoencoders (modulo modifying the input).
You write "Learn and implement GPU accelerated Python techniques (eg.
shared variables) to improve speed".
Are you talking about theano there? So you want to add a
theano-dependency to sklearn?
No, sorry, that won't happen either.
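
[For context: "shared variables" here presumably refers to Theano's
theano.shared objects, which keep an array resident in device memory
(the GPU, when Theano is configured for one) so it is not copied from
the host on every call. A minimal sketch of the idea, assuming Theano
is installed; the array shapes are arbitrary:

    import numpy as np
    import theano
    import theano.tensor as T

    # A shared variable lives in device memory, so it is not shipped
    # from the host to the GPU on every function call.
    W = theano.shared(
        np.random.randn(784, 256).astype(theano.config.floatX),
        name="W")

    x = T.matrix("x")
    f = theano.function([x], T.dot(x, W))  # W is baked into the graph

    # W stays resident on the device across repeated calls to f.
    batch = np.random.randn(32, 784).astype(theano.config.floatX)
    out = f(batch)

The speed-up comes from Theano managing device memory, which is why
using it would pull a Theano dependency into scikit-learn.]
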
I don't think there is any point in submitting your proposal.
Sorry.
Andy