Do you mean porting existing CUDA code away from CUDA to some language
like Python using Pipes?  Or creating a solution that uses Pipes to chain
mappers / reducers together, where the mappers and/or reducers invoke
CUDA kernels?  Or something else entirely?

You could do something like the second example, if you had a CUDA-capable
card on each machine in the Hadoop cluster.  And you might want to limit
the number of mappers / reducers running concurrently on each node to 1,
since the CUDA kernels execute serially (or at least they used to... it's
been about three years since I've done any CUDA coding).  That is, unless
you had multiple graphics cards on each machine in the cluster.
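For what it's worth, on the older tasktracker-based setups I used, the per-node
limit could be enforced in mapred-site.xml; I'm writing the property names from
memory, so double-check them against your Hadoop version:

```
<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>1</value>
</property>
<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>1</value>
</property>
```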

Also, if you wanted to invoke the CUDA kernel in your reducer, you'd
probably need to spin through the entire Iterator of values and build up
your data structure in memory before writing it to the graphics card.
This could cause out-of-memory exceptions in your reducer if the set of
data sent to a single reducer was too large to hold in memory.

Can you give an example of what you are interested in?

On Mon, Feb 13, 2012 at 9:02 AM, jem85 <[email protected]> wrote:

>
> I was wondering if anyone has had any experience with porting cuda code to
> hadoop pipes. Any assistance would be greatly appreciated.
>
> Thanks,
> --
> View this message in context:
> http://old.nabble.com/HADOOP-PIPES-with-CUDA-tp33316352p33316352.html
> Sent from the Hadoop core-user mailing list archive at Nabble.com.
>
>


-- 

Thanks,
John C
