In fact the idea is to run some parts of the code on the GPU, as Patrick
described, and to extend the RDD structure so that it can also be distributed
across GPUs. The following article,
http://www.wired.com/2013/06/andrew_ng/, describes a hybrid GPU/CPU
implementation (with MPI) that outperforms a
16,000-core cluster.
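
For reference, here is a minimal sketch of what that could look like from
PySpark, assuming Numba (with CUDA support) and NumPy are installed on every
worker; the kernel, the RDD contents, and the partition-index device
assignment are illustrative only, not a settled design:

import numpy as np
from numba import cuda
from pyspark import SparkContext

@cuda.jit
def scale_kernel(arr, factor):
    # Each GPU thread scales one element of the array.
    i = cuda.grid(1)
    if i < arr.size:
        arr[i] *= factor

def gpu_partition(index, iterator):
    # Crude answer to the multi-GPU assignment problem Patrick mentions:
    # pick a device from the partition index. A real setup would need
    # something sturdier (e.g. pinning executors to devices).
    cuda.select_device(index % len(cuda.gpus))

    data = np.fromiter(iterator, dtype=np.float64)
    if data.size == 0:
        return iter([])

    d_data = cuda.to_device(data)  # copy the partition to the GPU
    threads = 256
    blocks = (data.size + threads - 1) // threads
    scale_kernel[blocks, threads](d_data, 2.0)
    return iter(d_data.copy_to_host().tolist())

if __name__ == "__main__":
    sc = SparkContext(appName="gpu-sketch")
    rdd = sc.parallelize(range(1000000), 8)
    print(rdd.mapPartitionsWithIndex(gpu_partition).take(5))
    sc.stop()

Each partition is shipped to one GPU as a NumPy array, processed by the
kernel, and handed back to Spark as an ordinary Python list, so the rest of
the pipeline stays on the CPU.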


On Fri, Apr 11, 2014 at 3:53 PM, Patrick Grinaway <pgrina...@gmail.com> wrote:

> I've actually done it using PySpark and Python libraries which call CUDA
> code, though I've never done it from Scala directly. The only major
> challenge I've hit is assigning tasks to GPUs on multi-GPU machines.
>
> Sent from my iPhone
>
> > On Apr 11, 2014, at 8:38 AM, Jaonary Rabarisoa <jaon...@gmail.com>
> wrote:
> >
> > Hi all,
> >
> > I'm just wondering if hybrid GPU/CPU computation is something that is
> > feasible with Spark? And what would be the best way to do it?
> >
> >
> > Cheers,
> >
> > Jaonary
>
