In fact, the idea is to run some parts of the code on the GPU, as Patrick
described, and extend the RDD structure so that it can also be distributed
on GPUs. The following article,
http://www.wired.com/2013/06/andrew_ng/, describes a hybrid GPU/CPU
implementation (with MPI) that outperforms a 16,000-core cluster.
I've actually done it using PySpark and Python libraries that call CUDA code,
though I've never done it from Scala directly. The only major challenge I've
hit is assigning tasks to GPUs on multi-GPU machines.
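
For what it's worth, a minimal sketch of that pattern. It assumes Numba's CUDA
support is installed on every worker; the toy scale_kernel, the hard-coded
NUM_GPUS_PER_NODE, and the round-robin pinning by partition index are
illustrative assumptions, not a tested recipe:

import numpy as np
from numba import cuda
from pyspark import SparkContext

NUM_GPUS_PER_NODE = 2  # assumption: homogeneous workers with 2 GPUs each

@cuda.jit
def scale_kernel(vec, factor):
    # Toy kernel: multiply each element in place.
    i = cuda.grid(1)
    if i < vec.size:
        vec[i] *= factor

def gpu_scale(partition_index, rows):
    # Crude work-around for the task-to-GPU assignment problem mentioned above:
    # pin each task to a device by partition index, round-robin.
    cuda.select_device(partition_index % NUM_GPUS_PER_NODE)
    data = np.fromiter(rows, dtype=np.float32)
    if data.size == 0:
        return iter(())
    d_data = cuda.to_device(data)       # one big host-to-device copy
    threads = 256
    blocks = (data.size + threads - 1) // threads
    scale_kernel[blocks, threads](d_data, np.float32(2.0))
    return iter(d_data.copy_to_host())  # hand the results back to Spark

sc = SparkContext(appName="pyspark-cuda-sketch")
total = (sc.parallelize(range(1 << 20), numSlices=8)
           .map(float)
           .mapPartitionsWithIndex(gpu_scale)
           .sum())
print(total)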
> On Apr 11, 2014, at 8:38 AM, Jaonary Rabarisoa wrote:
On Fri, Apr 11, 2014 at 3:34 PM, Dean Wampler wrote:
I've thought about this idea, although I haven't tried it, but I think the
right approach is to pick your granularity boundary and use Spark + JVM for
the large-scale parts of the algorithm, then use the GPGPU API for number
crunching large chunks at a time. No need to run the JVM and Spark on the
GPU.
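
To make that boundary concrete, here is a rough sketch of the split in
PySpark (the same structure applies from Scala). crunch_on_gpu is a
hypothetical stand-in for the actual CUDA routine, kept as a CPU placeholder
so the sketch runs as-is:

import numpy as np
from pyspark import SparkContext

def crunch_on_gpu(chunk):
    # Hypothetical stand-in for the GPGPU call: copy `chunk` to the device,
    # launch the kernel(s), copy the result back. CPU placeholder here so
    # the sketch runs without CUDA.
    return chunk * chunk

def process_partition(rows):
    # Coarse granularity: materialize the whole partition as one contiguous
    # array so a single transfer + kernel launch covers many records.
    chunk = np.fromiter(rows, dtype=np.float64)
    if chunk.size == 0:
        return iter(())
    crunched = crunch_on_gpu(chunk)
    # Reduce locally, then let Spark do the large-scale aggregation.
    return iter([float(crunched.sum())])

sc = SparkContext(appName="granularity-sketch")
total = (sc.parallelize(range(1000000), numSlices=16)
           .mapPartitions(process_partition)
           .sum())  # the cluster-wide part stays in Spark on the JVM/CPU side
print(total)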
There is a Scala implementation for GPGPUs (NVIDIA CUDA, to be precise), but
you also need to port Mesos for GPUs, and I am not sure about Mesos. Also, the
current Scala GPU version is not stable enough to be used commercially.
Hope this helps.
Thanks
saurabh.
This is a bit crazy :)
I suppose you would have to run Java code on the GPU!
I heard there are some funny projects to do that...
Pascal
On Fri, Apr 11, 2014 at 2:38 PM, Jaonary Rabarisoa wrote:
> Hi all,
>
> I'm just wondering if hybrid GPU/CPU computation is something that is
> feasible with Spark?