Hi Michal.
Please direct all such questions to the sklearn mailing list or
Stack Overflow.
I doubt there will be any integration of GPU computation into numpy in 
the near future (or probably ever).
There is also no plan to integrate GPU acceleration into scikit-learn,
mostly because it would introduce a lot of extra dependencies.

If your problem is kernel SVMs, GPUs don't really help much anyhow. If 
your dataset is large, I would suggest using the kernel approximation 
module.
This example illustrates its use and the resulting speed-ups:
http://scikit-learn.org/dev/auto_examples/plot_kernel_approximation.html#example-plot-kernel-approximation-py
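
Roughly, the idea the example demonstrates (this is only a minimal
sketch; the digits dataset and the hyper-parameter values below are
placeholders, not tuned choices): approximate the RBF kernel with an
explicit feature map and train a fast *linear* SVM on the transformed
features, instead of fitting an exact kernel SVM, whose training
scales poorly with the number of samples.

    from sklearn.datasets import load_digits
    from sklearn.kernel_approximation import RBFSampler
    from sklearn.pipeline import Pipeline
    from sklearn.svm import LinearSVC

    digits = load_digits()
    X, y = digits.data / 16., digits.target  # scale pixels into [0, 1]

    # Random Fourier features (Rahimi & Recht) approximating the RBF
    # kernel; more components -> better approximation, higher cost.
    clf = Pipeline([
        ("feature_map", RBFSampler(gamma=.2, n_components=300,
                                   random_state=0)),
        ("svm", LinearSVC()),
    ])
    clf.fit(X, y)
    print(clf.score(X, y))

Because the feature map is explicit, the linear SVM trains in time
roughly linear in the number of samples, so you can trade approximation
quality (n_components) against speed -- which is exactly the comparison
the linked example plots.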

Best,
Andy


On 05/05/2014 11:20 PM, Michal Sourek wrote:
> Hi Andreas,
> given your long track record of scikit-learn development,
> let me raise an issue from a code-execution point of view.
>
> Scikit-learn has many marvelous tools.
>
> My primary concern relates to supervised learning of an SVM-based
> classifier.
>
> In the initial coarse optimisation scan over a non-convex problem,
> running GridSearchCV() on a real-world dataset takes so long that
> even off-peak jobs, scheduled onto the available CPU/core resource
> pool during weekend data-centre shifts, cannot complete it.
>
> I would very much appreciate your comments, ideas, or pointers on
> scikit-learn acceleration strategies that are available, or possible
> in principle, in this context.
>
> Are there, to your knowledge, any R&D activities running in this
> direction?
>
> I have not found any promising candidates anywhere near the
> scikit-learn framework, nor am I aware of any industry spin-off
> aiming to bridge the gap between raw, low-level GPU/CUDA resources
> and the scikit-learn tools.
>
> I have thought of a primitive intermediate approach that would need
> minimal scikit-learn integration effort: add a GPU-enabling layer to
> NumPy, on the assumption that all scikit-learn code uses vectorised
> NumPy services. NumPy would then transparently call GPU services when
> GPU resources are detected on the local host, and fall back to plain
> CPU calls on hosts without GPUs. Something similar seems to be
> available in IPython already, if I remember well, but a GPU-grid
> approach would in principle be a better architecture for this.
>
> I appreciate the time you've spent reading this, Andreas -- thank you.
>
> Remaining with respect,
> Michal Sourek

