I've thought about this idea, although I haven't tried it. I think the right approach is to pick your granularity boundary: use Spark and the JVM for the large-scale parts of the algorithm, then use a GPGPU API to crunch numbers in large chunks at a time. There's no need to run the JVM and Spark on the GPU itself, which wouldn't make sense anyway.
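
To make the granularity point concrete, here's a rough, untested sketch of what I mean. GpuKernel here is a hypothetical stand-in for a real JNI/JCuda binding (I've given it a plain CPU body so the sketch actually runs); in a real setup that call would copy the batch to the device, launch a kernel, and copy the results back:

import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical stand-in for a native GPU binding (JNI / JCuda / etc.).
// The CPU implementation below is only there so the sketch is runnable.
object GpuKernel {
  def dotProducts(a: Array[Float], b: Array[Float], dim: Int): Array[Float] =
    a.grouped(dim).zip(b.grouped(dim)).map { case (x, y) =>
      x.zip(y).map { case (p, q) => p * q }.sum
    }.toArray
}

object GpuBatchingExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("gpu-batching").setMaster("local[*]"))

    val dim = 128
    val vectors = sc.parallelize(
      Seq.fill(10000)((Array.fill(dim)(1.0f), Array.fill(dim)(2.0f))))

    // Granularity boundary: Spark handles distribution across the cluster;
    // each partition is flattened into large contiguous arrays and handed to
    // the "GPU" in one call, amortizing the per-call overhead over thousands
    // of vectors instead of paying it per record.
    val results = vectors.mapPartitions { iter =>
      val pairs = iter.toArray
      val a = pairs.flatMap(_._1)
      val b = pairs.flatMap(_._2)
      GpuKernel.dotProducts(a, b, dim).iterator
    }

    println(s"computed ${results.count()} dot products")
    sc.stop()
  }
}

The key is that mapPartitions lets you hand the native code one big batch per partition rather than making one tiny call per record, which is where the GPU transfer overhead would otherwise kill you.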
Here's another approach: http://www.cakesolutions.net/teamblogs/2013/02/13/akka-and-cuda/

dean

On Fri, Apr 11, 2014 at 7:49 AM, Saurabh Jha <saurabh.jha.2...@gmail.com> wrote:

> There is a Scala implementation for GPGPUs (NVIDIA CUDA, to be precise), but
> you would also need to port Mesos for GPUs, and I am not sure about Mesos.
> Also, the current Scala GPU version is not stable enough to be used
> commercially.
>
> Hope this helps.
>
> Thanks,
> Saurabh
>
> *Saurabh Jha*
> Intl. Exchange Student
> School of Computing Engineering
> Nanyang Technological University,
> Singapore
> Web: http://profile.saurabhjha.in
> Mob: +65 94663172
>
>
> On Fri, Apr 11, 2014 at 8:40 PM, Pascal Voitot Dev <
> pascal.voitot....@gmail.com> wrote:
>
>> This is a bit crazy :)
>> I suppose you would have to run Java code on the GPU!
>> I heard there are some funny projects to do that...
>>
>> Pascal
>>
>> On Fri, Apr 11, 2014 at 2:38 PM, Jaonary Rabarisoa <jaon...@gmail.com> wrote:
>>
>>> Hi all,
>>>
>>> I'm just wondering if hybrid GPU/CPU computation is something that is
>>> feasible with Spark, and what would be the best way to do it.
>>>
>>> Cheers,
>>>
>>> Jaonary
>>

--
Dean Wampler, Ph.D.
Typesafe
@deanwampler
http://typesafe.com
http://polyglotprogramming.com