In general, large-scale machine learning is already I/O-bound.  There are
some workloads that would not be, but to feed a GPU at a reasonable rate,
the data almost has to be memory-resident.

For more information on CUDA from Java, see (among others)
http://www.jcuda.de/
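
For a sense of scale, the "big matrix math jobs" in question are mostly
dense multiply loops like the sketch below. This is a hypothetical
plain-Java baseline (not Mahout code, class and method names are made up);
the inner loop is exactly the hot spot a JNI binding or a JCuda kernel
would hand off to the GPU.

```java
// Hypothetical plain-Java baseline for C = A * B on square n x n
// matrices stored row-major in flat arrays. The triple loop is the
// kind of hot spot JNI/CUDA bindings would replace.
public class Gemm {

    static float[] multiply(float[] a, float[] b, int n) {
        float[] c = new float[n * n];
        for (int i = 0; i < n; i++) {
            for (int k = 0; k < n; k++) {
                // Hoist a[i][k]; the j-loop then streams b and c
                // sequentially, which is friendlier to the cache.
                float aik = a[i * n + k];
                for (int j = 0; j < n; j++) {
                    c[i * n + j] += aik * b[k * n + j];
                }
            }
        }
        return c;
    }

    public static void main(String[] args) {
        // 2x2 check: [[1,2],[3,4]] * [[5,6],[7,8]] = [[19,22],[43,50]]
        float[] a = {1, 2, 3, 4};
        float[] b = {5, 6, 7, 8};
        float[] c = multiply(a, b, 2);
        System.out.println(c[0] + " " + c[1] + " " + c[2] + " " + c[3]);
    }
}
```

The point of measuring this baseline first is that the copy across the
JNI boundary (and on to device memory) is pure overhead; the GPU only
wins if the arithmetic saved outweighs it, which is why memory-resident
data matters so much.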

On Sun, Jul 8, 2012 at 4:04 PM, Sean Owen <[email protected]> wrote:

> More than that, Mahout is mostly Hadoop-based, which is well up the
> stack from Java. No, there is nothing CUDA-related in the project. The
> closest things are the pure-Java, non-Hadoop-based recommender pieces.
> But they are still far from CUDA.
>
> I think CUDA is intriguing since a lot of ML is a bunch of matrix math
> and GPUs are very good at vectorized math. I think a first step is to
> introduce proper JNI bindings for the big matrix math jobs and see how
> much that gains. If it's a lot, then CUDA-izing the JNI pieces is an
> interesting next step.
>
> On Sun, Jul 8, 2012 at 11:41 PM, mohsen jadidi <[email protected]>
> wrote:
> > Hello ,
> >
> > This is my first post here and I just started reading about Hadoop,
> > Mahout, and all. I was wondering if there is any solution for using
> > Mahout with parallel computing on a GPU (mainly CUDA)? I know it's a
> > bit of a weird question to ask, because CUDA is C-based and Mahout is
> > Java-based, but I ask it out of curiosity! I think it would be a very
> > cool combination to use both cluster and local parallelisation!
> >
> > cheers,
> > --
> > Mohsen
>
